pkgsrc-WIP-changes archive


py-distributed: remove accidentally added file



Module Name:	pkgsrc-wip
Committed By:	Matthew Danielson <matthewd%fastmail.us@localhost>
Pushed By:	matthewd
Date:		Fri Sep 1 18:46:55 2023 -0700
Changeset:	5c1ff0cc31637cd1a55b0b4ee73aa2246c2f1a62

Removed Files:
	py-distributed/test.out

Log Message:
py-distributed: remove accidentally added file
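
For reference, dropping a file that was committed by mistake is normally just a git removal followed by a new commit; a minimal sketch of the equivalent commands (illustrative only, not necessarily the exact invocation used for this changeset, and the push target depends on the local pkgsrc-wip setup):

    git rm py-distributed/test.out          # stage the deletion of the stray file
    git commit -m "py-distributed: remove accidentally added file"
    git push                                # publish to the pkgsrc-wip repository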

To see a diff of this commit:
https://wip.pkgsrc.org/cgi-bin/gitweb.cgi?p=pkgsrc-wip.git;a=commitdiff;h=5c1ff0cc31637cd1a55b0b4ee73aa2246c2f1a62

Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.

diffstat:
 py-distributed/test.out | 49699 ----------------------------------------------
 1 file changed, 49699 deletions(-)

diffs:
diff --git a/py-distributed/test.out b/py-distributed/test.out
deleted file mode 100644
index b589694ce7..0000000000
--- a/py-distributed/test.out
+++ /dev/null
@@ -1,49699 +0,0 @@
-=> Bootstrap dependency digest>=20211023: found digest-20220214
-===> Skipping vulnerability checks.
-WARNING: No /home/matthew/pkgsrc/install.20220728/pkgdb/pkg-vulnerabilities file found.
-WARNING: To fix run: `/home/matthew/pkgsrc/install.20220728/sbin/pkg_admin -K /home/matthew/pkgsrc/install.20220728/pkgdb fetch-pkg-vulnerabilities'.
-=> Test dependency py310-lz4>=3.1.10: found py310-lz4-3.1.10nb1
-=> Test dependency py310-zstandard>=0.18.0: found py310-zstandard-0.18.0
-=> Test dependency py310-requests>=2.28.1: found py310-requests-2.28.1
-=> Test dependency py310-test-[0-9]*: found py310-test-7.1.2
-=> Test dependency py310-test-timeout-[0-9]*: found py310-test-timeout-2.1.0
-=> Test dependency py310-dask-2022.8.1: found py310-dask-2022.8.1
-=> Test dependency py310-ipywidgets>=7.7.0: found py310-ipywidgets-7.7.0
-===> Testing for py310-distributed-2022.8.1
-cd /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1 && /bin/env USETOOLS=no PTHREAD_CFLAGS=\ -pthread\  PTHREAD_LDFLAGS=\ -pthread PTHREAD_LIBS=-lpthread\ -lrt PTHREADBASE=/usr DL_CFLAGS= DL_LDFLAGS= DL_LIBS= PYTHON=/home/matthew/pkgsrc/install.20220728/bin/python3.10 CC=cc CFLAGS=-O2\ -I/usr/include\ -I/home/matthew/pkgsrc/install.20220728/include/python3.10 CPPFLAGS=-I/usr/include\ -I/home/matthew/pkgsrc/install.20220728/include/python3.10 CXX=c++ CXXFLAGS=-O2\ -I/usr/include\ -I/home/matthew/pkgsrc/install.20220728/include/python3.10 COMPILER_RPATH_FLAG=-Wl,-R F77=gfortran FC=gfortran FFLAGS=-O LANG=C LC_ALL=C LC_COLLATE=C LC_CTYPE=C LC_MESSAGES=C LC_MONETARY=C LC_NUMERIC=C LC_TIME=C LDFLAGS=-L/usr/lib64\ -Wl,-R/usr/lib64\ -Wl,-R/home/matthew/pkgsrc/install.20220728/lib LINKER_RPATH_FLAG=-R PATH=/home/matthew/pkgsrc/work/wip/py-distributed/work/.cwrapper/bin:/home/matthew/pkgsrc/work/wip/py-distributed/work/.buildlink/bin:/home/matthew/pkgsrc/work/wip/py-distributed/work/.tools/bin:/home/matthew/pkgsrc/install.20220728/bin:/home/matthew/pkgsrc/install.20220728/bin:/home/matthew/pkgsrc/install.20220728/sbin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/home/matthew/pkgsrc/install.20220728/bin:/home/matthew/pkgsrc/install.20220728/bin PREFIX=/home/matthew/pkgsrc/install.20220728 MAKELEVEL=0 CONFIG_SITE= PKG_SYSCONFDIR=/home/matthew/pkgsrc/install.20220728/etc CPP=cpp CXXCPP=cpp HOME=/home/matthew/pkgsrc/work/wip/py-distributed/work/.home CWRAPPERS_CONFIG_DIR=/home/matthew/pkgsrc/work/wip/py-distributed/work/.cwrapper/config CPP=cpp LOCALBASE=/home/matthew/pkgsrc/install.20220728 X11BASE=/home/matthew/pkgsrc/install.20220728 PKGMANDIR=man PKGINFODIR=info PKGGNUDIR=gnu/ MAKECONF=/dev/null OBJECT_FMT=ELF USETOOLS=no BSD_INSTALL_PROGRAM=/bin/install\ -c\ -s\ -o\ matthew\ -g\ matthew\ -m\ 755 BSD_INSTALL_SCRIPT=/bin/install\ -c\ -o\ matthew\ -g\ matthew\ -m\ 755 BSD_INSTALL_LIB=/bin/install\ -c\ -o\ matthew\ -g\ matthew\ -m\ 755 BSD_INSTALL_DATA=/bin/install\ -c\ -o\ matthew\ -g\ matthew\ -m\ 644 BSD_INSTALL_MAN=/bin/install\ -c\ -o\ matthew\ -g\ matthew\ -m\ 644 BSD_INSTALL=/bin/install BSD_INSTALL_PROGRAM_DIR=/bin/install\ -d\ -o\ matthew\ -g\ matthew\ -m\ 755 BSD_INSTALL_SCRIPT_DIR=/bin/install\ -d\ -o\ matthew\ -g\ matthew\ -m\ 755 BSD_INSTALL_LIB_DIR=/bin/install\ -d\ -o\ matthew\ -g\ matthew\ -m\ 755 BSD_INSTALL_DATA_DIR=/bin/install\ -d\ -o\ matthew\ -g\ matthew\ -m\ 755 BSD_INSTALL_MAN_DIR=/bin/install\ -d\ -o\ matthew\ -g\ matthew\ -m\ 755 BSD_INSTALL_GAME=/bin/install\ -c\ -s\ -o\ matthew\ -g\ matthew\ -m\ 0755 BSD_INSTALL_GAME_DATA=/bin/install\ -c\ -o\ matthew\ -g\ matthew\ -m\ 0644 BSD_INSTALL_GAME_DIR=/bin/install\ -d\ -o\ matthew\ -g\ matthew\ -m\ 0755 INSTALL_INFO= MAKEINFO=/home/matthew/pkgsrc/work/wip/py-distributed/work/.tools/bin/makeinfo FLEX= BISON= ITSTOOL=/home/matthew/pkgsrc/work/wip/py-distributed/work/.tools/bin/itstool GDBUS_CODEGEN=/home/matthew/pkgsrc/work/wip/py-distributed/work/.tools/bin/gdbus-codegen PKG_CONFIG=/home/matthew/pkgsrc/work/wip/py-distributed/work/.tools/bin/pkg-config PKG_CONFIG_LIBDIR=/home/matthew/pkgsrc/work/wip/py-distributed/work/.buildlink/lib64/pkgconfig:/home/matthew/pkgsrc/work/wip/py-distributed/work/.buildlink/lib/pkgconfig:/home/matthew/pkgsrc/work/wip/py-distributed/work/.buildlink/share/pkgconfig PKG_CONFIG_LOG=/home/matthew/pkgsrc/work/wip/py-distributed/work/.pkg-config.log PKG_CONFIG_PATH= CWRAPPERS_CONFIG_DIR=/home/matthew/pkgsrc/work/wip/py-distributed/work/.cwrapper/config /home/matthew/pkgsrc/install.20220728/bin/python3.10 -m pytest -s
-============================= test session starts ==============================
-platform linux -- Python 3.10.6, pytest-7.1.2, pluggy-0.13.1 -- /home/matthew/pkgsrc/install.20220728/bin/python3.10
-cachedir: .pytest_cache
-rootdir: /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1, configfile: setup.cfg
-plugins: xdist-2.5.0, asdf-2.8.3, timeout-2.1.0, rerunfailures-10.2, cov-3.0.0, forked-1.3.0
-timeout: 300.0s
-timeout method: thread
-timeout func_only: False
-collecting ... collected 2991 items / 14 skipped
-
-distributed/cli/tests/test_dask_scheduler.py::test_defaults 2022-08-26 13:56:03,660 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:04,012 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:04,052 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:04,054 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:04,054 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:04,054 - distributed.scheduler - INFO -   Scheduler at:  tcp://192.168.1.159:8786
-2022-08-26 13:56:04,054 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:04,118 - distributed.scheduler - INFO - Receive client connection: Client-7f2c3adf-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:04,120 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:04,132 - distributed.scheduler - INFO - Remove client Client-7f2c3adf-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:04,132 - distributed.scheduler - INFO - Remove client Client-7f2c3adf-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:04,132 - distributed.scheduler - INFO - Close client connection: Client-7f2c3adf-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:04,132 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:04,132 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:04,133 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:04,133 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:8786'
-2022-08-26 13:56:04,133 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_hostport 2022-08-26 13:56:04,717 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:04,720 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:04,722 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:04,724 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:04,724 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:04,724 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:60989
-2022-08-26 13:56:04,724 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:56:04,778 - distributed.scheduler - INFO - Receive client connection: Client-7fcd6e2c-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:04,942 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:09,955 - distributed.scheduler - INFO - Remove client Client-7fcd6e2c-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:09,956 - distributed.scheduler - INFO - Remove client Client-7fcd6e2c-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:09,956 - distributed.scheduler - INFO - Close client connection: Client-7fcd6e2c-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:09,956 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:09,956 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:09,956 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:09,957 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://127.0.0.1:60989'
-2022-08-26 13:56:09,957 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_no_dashboard 2022-08-26 13:56:10,495 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:10,498 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:10,500 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:10,502 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:10,503 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:10,503 - distributed.scheduler - INFO -   Scheduler at:  tcp://192.168.1.159:8786
-2022-08-26 13:56:10,503 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:10,564 - distributed.scheduler - INFO - Receive client connection: Client-833ec63e-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:10,748 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:10,761 - distributed.scheduler - INFO - Remove client Client-833ec63e-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:10,761 - distributed.scheduler - INFO - Remove client Client-833ec63e-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:10,761 - distributed.scheduler - INFO - Close client connection: Client-833ec63e-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:10,761 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:10,761 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:10,762 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:10,762 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:8786'
-2022-08-26 13:56:10,762 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_dashboard 2022-08-26 13:56:11,300 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:11,654 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:11,695 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:11,697 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:11,697 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:11,698 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46511
-2022-08-26 13:56:11,698 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:56:12,279 - distributed.scheduler - INFO - Receive client connection: Client-83b99a14-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:12,280 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:12,293 - distributed.scheduler - INFO - Remove client Client-83b99a14-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:12,293 - distributed.scheduler - INFO - Remove client Client-83b99a14-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:12,293 - distributed.scheduler - INFO - Close client connection: Client-83b99a14-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:13,031 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:13,032 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:13,032 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:13,032 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://127.0.0.1:46511'
-2022-08-26 13:56:13,033 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_dashboard_non_standard_ports 2022-08-26 13:56:13,670 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:14,043 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:14,083 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:14,085 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:14,085 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:14,086 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:37053
-2022-08-26 13:56:14,086 - distributed.scheduler - INFO -   dashboard at:                    :55533
-2022-08-26 13:56:14,706 - distributed.scheduler - INFO - Receive client connection: Client-85230dc5-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:14,707 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:14,719 - distributed.scheduler - INFO - Remove client Client-85230dc5-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:14,719 - distributed.scheduler - INFO - Remove client Client-85230dc5-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:14,719 - distributed.scheduler - INFO - Close client connection: Client-85230dc5-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:14,982 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:14,982 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:14,983 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:14,983 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:37053'
-2022-08-26 13:56:14,983 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_dashboard_allowlist 2022-08-26 13:56:15,577 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:15,950 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:15,990 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:15,992 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:15,992 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:15,993 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:41183
-2022-08-26 13:56:15,993 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:16,556 - distributed.scheduler - INFO - Receive client connection: Client-8645a78f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:16,558 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:16,569 - distributed.scheduler - INFO - Remove client Client-8645a78f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:16,569 - distributed.scheduler - INFO - Remove client Client-8645a78f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:16,569 - distributed.scheduler - INFO - Close client connection: Client-8645a78f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:17,067 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:17,067 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:17,068 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:17,068 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:41183'
-2022-08-26 13:56:17,068 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_interface 2022-08-26 13:56:17,705 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:17,708 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:17,710 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:17,712 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:17,713 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:17,713 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:52005
-2022-08-26 13:56:17,713 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:56:17,726 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:41011'
-2022-08-26 13:56:18,100 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36191
-2022-08-26 13:56:18,100 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36191
-2022-08-26 13:56:18,100 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34233
-2022-08-26 13:56:18,100 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:52005
-2022-08-26 13:56:18,100 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:18,100 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:56:18,100 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:56:18,100 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tw3zz57l
-2022-08-26 13:56:18,100 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:18,287 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36191', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:18,451 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36191
-2022-08-26 13:56:18,452 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:18,452 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:52005
-2022-08-26 13:56:18,452 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:18,453 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:18,454 - distributed.scheduler - INFO - Receive client connection: Client-878ac6bb-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:18,455 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:18,467 - distributed.scheduler - INFO - Remove client Client-878ac6bb-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:18,467 - distributed.scheduler - INFO - Remove client Client-878ac6bb-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:18,467 - distributed.scheduler - INFO - Close client connection: Client-878ac6bb-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:18,467 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:18,468 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41011'.
-2022-08-26 13:56:18,468 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 13:56:18,468 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36191
-2022-08-26 13:56:18,469 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5380d46f-b28e-447b-913d-a56cc1786c95 Address tcp://127.0.0.1:36191 Status: Status.closing
-2022-08-26 13:56:18,469 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36191', status: closing, memory: 0, processing: 0>
-2022-08-26 13:56:18,469 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36191
-2022-08-26 13:56:18,469 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:18,588 - distributed.dask_worker - INFO - End worker
-2022-08-26 13:56:18,681 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:18,682 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:18,682 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:18,682 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://127.0.0.1:52005'
-2022-08-26 13:56:18,682 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_pid_file 2022-08-26 13:56:19,216 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:19,219 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:19,221 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:19,223 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:19,224 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:19,224 - distributed.scheduler - INFO -   Scheduler at:  tcp://192.168.1.159:8786
-2022-08-26 13:56:19,224 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:19,230 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:19,230 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:19,231 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:19,231 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:8786'
-2022-08-26 13:56:19,231 - distributed.scheduler - INFO - End scheduler
-2022-08-26 13:56:19,724 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:19,724 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:42197'.
-2022-08-26 13:56:19,730 - distributed.dask_worker - INFO - End worker
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x557fdde97980>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 1308, in _connect
-    comm = await connect(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 481, in start
-    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
-    return await fut
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/nanny.py", line 359, in start_unsafe
-    msg = await self.scheduler.register_nanny()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 1151, in send_recv_from_rpc
-    comm = await self.pool.connect(self.addr)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 1372, in connect
-    return await connect_attempt
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 1328, in _connect
-    raise CommClosedError("ConnectionPool closing.")
-distributed.comm.core.CommClosedError: ConnectionPool closing.
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/bin/dask-worker", line 33, in <module>
-    sys.exit(load_entry_point('distributed==2022.8.1', 'console_scripts', 'dask-worker')())
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
-    return self.main(*args, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1055, in main
-    rv = self.invoke(ctx)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
-    return ctx.invoke(self.callback, **ctx.params)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 760, in invoke
-    return __callback(*args, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 500, in main
-    asyncio.run(run())
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/runners.py", line 44, in run
-    return loop.run_until_complete(main)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
-    return future.result()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 497, in run
-    [task.result() for task in done]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 497, in <listcomp>
-    [task.result() for task in done]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 472, in wait_for_nannies_to_finish
-    await asyncio.gather(*nannies)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 650, in _wrap_awaitable
-    return (yield from awaitable.__await__())
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 489, in start
-    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
-RuntimeError: Nanny failed to start.
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_scheduler_port_zero 2022-08-26 13:56:20,211 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:20,214 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:20,216 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:20,218 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:20,219 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:20,219 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:37875
-2022-08-26 13:56:20,219 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:20,231 - distributed.scheduler - INFO - Receive client connection: Client-89098d9b-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:20,413 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:20,424 - distributed.scheduler - INFO - Remove client Client-89098d9b-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:20,425 - distributed.scheduler - INFO - Remove client Client-89098d9b-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:20,425 - distributed.scheduler - INFO - Close client connection: Client-89098d9b-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:20,425 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:20,425 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:20,425 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:20,426 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:37875'
-2022-08-26 13:56:20,426 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_dashboard_port_zero 2022-08-26 13:56:20,966 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:21,320 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:21,360 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:21,362 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:21,363 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:21,363 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:47799
-2022-08-26 13:56:21,363 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40669
-2022-08-26 13:56:21,636 - distributed.scheduler - INFO - Receive client connection: Client-897c10b2-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:21,638 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:21,650 - distributed.scheduler - INFO - Remove client Client-897c10b2-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:21,650 - distributed.scheduler - INFO - Remove client Client-897c10b2-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:21,650 - distributed.scheduler - INFO - Close client connection: Client-897c10b2-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:21,651 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:21,651 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:21,651 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:21,651 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://127.0.0.1:47799'
-2022-08-26 13:56:21,651 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_preload_file 2022-08-26 13:56:22,236 - distributed.utils - INFO - Reload module scheduler_info from .py file
-2022-08-26 13:56:22,237 - distributed.preloading - INFO - Import preload module: /tmp/pytest-of-matthew/pytest-12/test_preload_file0/scheduler_info.py
-2022-08-26 13:56:22,237 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:22,238 - distributed.preloading - INFO - Creating preload: /tmp/pytest-of-matthew/pytest-12/test_preload_file0/scheduler_info.py
-2022-08-26 13:56:22,238 - distributed.utils - INFO - Reload module scheduler_info from .py file
-2022-08-26 13:56:22,238 - distributed.preloading - INFO - Import preload module: /tmp/pytest-of-matthew/pytest-12/test_preload_file0/scheduler_info.py
-2022-08-26 13:56:22,611 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:22,651 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:22,652 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:22,653 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:22,653 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:54535
-2022-08-26 13:56:22,653 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:22,653 - distributed.preloading - INFO - Run preload setup: /tmp/pytest-of-matthew/pytest-12/test_preload_file0/scheduler_info.py
-2022-08-26 13:56:22,664 - distributed.scheduler - INFO - Receive client connection: Client-8a3ed2d9-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:22,665 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:22,666 - distributed.worker - INFO - Run out-of-band function 'check_scheduler'
-2022-08-26 13:56:22,677 - distributed.scheduler - INFO - Remove client Client-8a3ed2d9-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:22,677 - distributed.scheduler - INFO - Remove client Client-8a3ed2d9-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:22,678 - distributed.scheduler - INFO - Close client connection: Client-8a3ed2d9-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:22,678 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:22,678 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:22,679 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:22,679 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:54535'
-2022-08-26 13:56:22,679 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_preload_module 2022-08-26 13:56:23,268 - distributed.preloading - INFO - Import preload module: scheduler_info
-2022-08-26 13:56:23,269 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:23,269 - distributed.preloading - INFO - Creating preload: scheduler_info
-2022-08-26 13:56:23,269 - distributed.preloading - INFO - Import preload module: scheduler_info
-2022-08-26 13:56:23,641 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:23,681 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:23,683 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:23,683 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:23,683 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:48359
-2022-08-26 13:56:23,683 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:23,683 - distributed.preloading - INFO - Run preload setup: scheduler_info
-2022-08-26 13:56:23,686 - distributed.scheduler - INFO - Receive client connection: Client-8adb7268-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:23,687 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:23,688 - distributed.worker - INFO - Run out-of-band function 'check_scheduler'
-2022-08-26 13:56:23,699 - distributed.scheduler - INFO - Remove client Client-8adb7268-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:23,700 - distributed.scheduler - INFO - Remove client Client-8adb7268-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:23,700 - distributed.scheduler - INFO - Close client connection: Client-8adb7268-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:23,700 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:23,700 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:23,700 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:23,701 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:48359'
-2022-08-26 13:56:23,701 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_preload_remote_module 2022-08-26 13:56:24,295 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:24,296 - distributed.preloading - INFO - Creating preload: http://localhost:50317/scheduler_info.py
-2022-08-26 13:56:24,296 - distributed.preloading - INFO - Downloading preload at http://localhost:50317/scheduler_info.py
-127.0.0.1 - - [26/Aug/2022 13:56:24] "GET /scheduler_info.py HTTP/1.1" 200 -
-2022-08-26 13:56:24,678 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:24,718 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:24,720 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:24,721 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:24,721 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:39989
-2022-08-26 13:56:24,721 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:24,721 - distributed.preloading - INFO - Run preload setup: http://localhost:50317/scheduler_info.py
-2022-08-26 13:56:24,725 - distributed.scheduler - INFO - Receive client connection: Client-8b774b9c-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:24,726 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:24,728 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 13:56:24,739 - distributed.scheduler - INFO - Remove client Client-8b774b9c-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:24,739 - distributed.scheduler - INFO - Remove client Client-8b774b9c-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:24,739 - distributed.scheduler - INFO - Close client connection: Client-8b774b9c-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:24,739 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:24,739 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:24,739 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:24,740 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:39989'
-2022-08-26 13:56:24,740 - distributed.scheduler - INFO - End scheduler
-Serving HTTP on 0.0.0.0 port 50317 (http://0.0.0.0:50317/) ...
-
-Keyboard interrupt received, exiting.
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_preload_config 2022-08-26 13:56:25,346 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:25,347 - distributed.preloading - INFO - Creating preload: 
-_scheduler_info = {}
-
-def dask_setup(scheduler):
-    _scheduler_info['address'] = scheduler.address
-    scheduler.foo = "bar"
-
-def get_scheduler_address():
-    return _scheduler_info['address']
-
-2022-08-26 13:56:25,347 - distributed.utils - INFO - Reload module tmp1xipxg1i from .py file
-2022-08-26 13:56:25,348 - distributed.preloading - INFO - Import preload module: /tmp/tmp1xipxg1i.py
-2022-08-26 13:56:25,726 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:25,766 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:25,768 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:25,768 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:25,769 - distributed.scheduler - INFO -   Scheduler at:  tcp://192.168.1.159:8786
-2022-08-26 13:56:25,769 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:25,769 - distributed.preloading - INFO - Run preload setup: 
-_scheduler_info = {}
-
-def dask_setup(scheduler):
-    _scheduler_info['address'] = scheduler.address
-    scheduler.foo = "bar"
-
-def get_scheduler_address():
-    return _scheduler_info['address']
-
-2022-08-26 13:56:25,779 - distributed.scheduler - INFO - Receive client connection: Client-8c180e6a-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:25,780 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:25,782 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 13:56:25,793 - distributed.scheduler - INFO - Remove client Client-8c180e6a-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:25,793 - distributed.scheduler - INFO - Remove client Client-8c180e6a-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:25,794 - distributed.scheduler - INFO - Close client connection: Client-8c180e6a-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:25,794 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:25,794 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:25,794 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:25,794 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:8786'
-2022-08-26 13:56:25,795 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_preload_command 2022-08-26 13:56:26,380 - distributed.utils - INFO - Reload module passthrough_info from .py file
-2022-08-26 13:56:26,381 - distributed.preloading - INFO - Import preload module: /tmp/tmpui77afl3/passthrough_info.py
-2022-08-26 13:56:26,381 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:26,382 - distributed.preloading - INFO - Creating preload: /tmp/tmpui77afl3/passthrough_info.py
-2022-08-26 13:56:26,382 - distributed.utils - INFO - Reload module passthrough_info from .py file
-2022-08-26 13:56:26,383 - distributed.preloading - INFO - Import preload module: /tmp/tmpui77afl3/passthrough_info.py
-2022-08-26 13:56:26,753 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:26,792 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:26,794 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:26,795 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:26,795 - distributed.scheduler - INFO -   Scheduler at:  tcp://192.168.1.159:8786
-2022-08-26 13:56:26,795 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:26,795 - distributed.preloading - INFO - Run preload setup: /tmp/tmpui77afl3/passthrough_info.py
-2022-08-26 13:56:26,806 - distributed.scheduler - INFO - Receive client connection: Client-8cb6b8f6-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:26,808 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:26,809 - distributed.worker - INFO - Run out-of-band function 'check_passthrough'
-2022-08-26 13:56:26,820 - distributed.scheduler - INFO - Remove client Client-8cb6b8f6-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:26,820 - distributed.scheduler - INFO - Remove client Client-8cb6b8f6-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:26,820 - distributed.scheduler - INFO - Close client connection: Client-8cb6b8f6-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:26,820 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:26,820 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:26,821 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:26,821 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:8786'
-2022-08-26 13:56:26,821 - distributed.scheduler - INFO - End scheduler
-/tmp/tmpa4ytqhn_.
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_preload_command_default 2022-08-26 13:56:27,406 - distributed.utils - INFO - Reload module passthrough_info from .py file
-2022-08-26 13:56:27,407 - distributed.preloading - INFO - Import preload module: /tmp/tmp_zrtqsy0/passthrough_info.py
-2022-08-26 13:56:27,408 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:27,408 - distributed.preloading - INFO - Creating preload: /tmp/tmp_zrtqsy0/passthrough_info.py
-2022-08-26 13:56:27,409 - distributed.utils - INFO - Reload module passthrough_info from .py file
-2022-08-26 13:56:27,409 - distributed.preloading - INFO - Import preload module: /tmp/tmp_zrtqsy0/passthrough_info.py
-2022-08-26 13:56:27,788 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:27,828 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:27,830 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:27,830 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:27,831 - distributed.scheduler - INFO -   Scheduler at:  tcp://192.168.1.159:8786
-2022-08-26 13:56:27,831 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:27,831 - distributed.preloading - INFO - Run preload setup: /tmp/tmp_zrtqsy0/passthrough_info.py
-2022-08-26 13:56:27,842 - distributed.scheduler - INFO - Receive client connection: Client-8d533443-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:27,844 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:27,845 - distributed.worker - INFO - Run out-of-band function 'check_passthrough'
-2022-08-26 13:56:27,856 - distributed.scheduler - INFO - Remove client Client-8d533443-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:27,857 - distributed.scheduler - INFO - Remove client Client-8d533443-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:27,857 - distributed.scheduler - INFO - Close client connection: Client-8d533443-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:27,857 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:27,857 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:27,857 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:27,858 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:8786'
-2022-08-26 13:56:27,858 - distributed.scheduler - INFO - End scheduler
-/tmp/tmp0tazvwqi.
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_version_option PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_idle_timeout SKIPPED
-distributed/cli/tests/test_dask_scheduler.py::test_restores_signal_handler SKIPPED
-distributed/cli/tests/test_dask_scheduler.py::test_multiple_workers_2 2022-08-26 13:56:28,455 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:28,455 - distributed.utils - INFO - Reload module tmpkzjh20ap from .py file
-2022-08-26 13:56:28,456 - distributed.preloading - INFO - Import preload module: /tmp/tmpkzjh20ap.py
-2022-08-26 13:56:28,458 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:28,458 - distributed.preloading - INFO - Creating preload: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 13:56:28,458 - distributed.utils - INFO - Reload module tmpzqxufmxs from .py file
-2022-08-26 13:56:28,459 - distributed.preloading - INFO - Import preload module: /tmp/tmpzqxufmxs.py
-2022-08-26 13:56:28,460 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:28,462 - distributed.preloading - INFO - Run preload setup: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 13:56:28,462 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:28,463 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:28,463 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:52337
-2022-08-26 13:56:28,463 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:56:28,466 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:34001'
-2022-08-26 13:56:28,481 - distributed.scheduler - INFO - Receive client connection: Client-8df21afe-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:28,671 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:28,838 - distributed.preloading - INFO - Creating preload: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 13:56:28,839 - distributed.utils - INFO - Reload module tmp2y_4z7jg from .py file
-2022-08-26 13:56:28,839 - distributed.preloading - INFO - Import preload module: /tmp/tmp2y_4z7jg.py
-2022-08-26 13:56:28,844 - distributed.preloading - INFO - Run preload setup: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 13:56:28,844 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41637
-2022-08-26 13:56:28,844 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41637
-2022-08-26 13:56:28,844 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33581
-2022-08-26 13:56:28,844 - distributed.worker - INFO - Waiting to connect to:      tcp://localhost:52337
-2022-08-26 13:56:28,845 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:28,845 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:56:28,845 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:56:28,845 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nkgpd28d
-2022-08-26 13:56:28,845 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:29,014 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41637', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:29,014 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41637
-2022-08-26 13:56:29,014 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:29,014 - distributed.worker - INFO -         Registered to:      tcp://localhost:52337
-2022-08-26 13:56:29,014 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:29,015 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:29,078 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 13:56:29,081 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 13:56:29,082 - distributed.scheduler - INFO - Remove client Client-8df21afe-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:29,082 - distributed.scheduler - INFO - Remove client Client-8df21afe-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:29,082 - distributed.scheduler - INFO - Close client connection: Client-8df21afe-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:29,083 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:29,083 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:34001'.
-2022-08-26 13:56:29,083 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 13:56:29,083 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41637
-2022-08-26 13:56:29,084 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b35a7f3a-ae98-4a24-8633-db558803a7de Address tcp://127.0.0.1:41637 Status: Status.closing
-2022-08-26 13:56:29,084 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41637', status: closing, memory: 0, processing: 0>
-2022-08-26 13:56:29,084 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41637
-2022-08-26 13:56:29,084 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:29,203 - distributed.dask_worker - INFO - End worker
-2022-08-26 13:56:29,297 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:29,297 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:29,297 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:29,298 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://127.0.0.1:52337'
-2022-08-26 13:56:29,298 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_multiple_workers 2022-08-26 13:56:29,844 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:29,847 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:29,849 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:29,851 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:29,852 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:29,852 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38869
-2022-08-26 13:56:29,852 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:56:29,853 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:46483'
-2022-08-26 13:56:29,855 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:40585'
-2022-08-26 13:56:30,210 - distributed.scheduler - INFO - Receive client connection: Client-8ec599e6-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:30,244 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35199
-2022-08-26 13:56:30,244 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35199
-2022-08-26 13:56:30,244 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35089
-2022-08-26 13:56:30,244 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38869
-2022-08-26 13:56:30,244 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:30,244 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:56:30,244 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:56:30,244 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gal6fddn
-2022-08-26 13:56:30,244 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:30,244 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35251
-2022-08-26 13:56:30,245 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35251
-2022-08-26 13:56:30,245 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46115
-2022-08-26 13:56:30,245 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38869
-2022-08-26 13:56:30,245 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:30,245 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:56:30,245 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:56:30,245 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_ptx1ve2
-2022-08-26 13:56:30,245 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:30,403 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:30,577 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35251', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:30,578 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35251
-2022-08-26 13:56:30,578 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:30,578 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38869
-2022-08-26 13:56:30,578 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:30,578 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:30,580 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35199', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:30,581 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35199
-2022-08-26 13:56:30,581 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:30,581 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38869
-2022-08-26 13:56:30,581 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:30,582 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:30,608 - distributed.scheduler - INFO - Remove client Client-8ec599e6-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:30,608 - distributed.scheduler - INFO - Remove client Client-8ec599e6-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:30,608 - distributed.scheduler - INFO - Close client connection: Client-8ec599e6-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:30,608 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:30,609 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:46483'.
-2022-08-26 13:56:30,609 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 13:56:30,610 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35199
-2022-08-26 13:56:30,611 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f6ec536c-1ab3-47a9-bf7c-cc67a6184f76 Address tcp://127.0.0.1:35199 Status: Status.closing
-2022-08-26 13:56:30,611 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35199', status: closing, memory: 0, processing: 0>
-2022-08-26 13:56:30,611 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35199
-2022-08-26 13:56:30,736 - distributed.dask_worker - INFO - End worker
-2022-08-26 13:56:30,822 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:30,822 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:40585'.
-2022-08-26 13:56:30,822 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 13:56:30,823 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35251
-2022-08-26 13:56:30,824 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ae84ae66-6ee1-4d2e-8d48-9d64c59b1227 Address tcp://127.0.0.1:35251 Status: Status.closing
-2022-08-26 13:56:30,824 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35251', status: closing, memory: 0, processing: 0>
-2022-08-26 13:56:30,824 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35251
-2022-08-26 13:56:30,824 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:30,943 - distributed.dask_worker - INFO - End worker
-2022-08-26 13:56:31,036 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:31,036 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:31,036 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:31,037 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://127.0.0.1:38869'
-2022-08-26 13:56:31,037 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_dask_scheduler.py::test_signal_handling[Signals.SIGINT] SKIPPED
-distributed/cli/tests/test_dask_scheduler.py::test_signal_handling[Signals.SIGTERM] SKIPPED
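
The SIGINT handling exercised in the log above (distributed._signals reporting "Received signal SIGINT (2)", then nanny, worker, and scheduler closing in order) follows the usual asyncio pattern of turning a signal into an event that the serve loop waits on. A minimal, generic sketch of that pattern is below; it is not the actual distributed._signals code, just an illustration of the shutdown sequence the log shows.

    import asyncio
    import signal

    async def serve_until_signal():
        # Convert SIGINT/SIGTERM into an asyncio.Event the server can await.
        stop = asyncio.Event()
        loop = asyncio.get_running_loop()
        for sig in (signal.SIGINT, signal.SIGTERM):
            loop.add_signal_handler(sig, stop.set)
        print("serving; press Ctrl-C to stop")
        await stop.wait()          # block here until a signal arrives
        print("signal received, shutting down cleanly")
        # ... close workers first, then the scheduler, mirroring the log order above

    if __name__ == "__main__":
        asyncio.run(serve_until_signal())
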
-distributed/cli/tests/test_dask_spec.py::test_text 2022-08-26 13:56:31,583 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:31,585 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:31,587 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:31,588 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:58731
-2022-08-26 13:56:31,588 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:31,592 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34615
-2022-08-26 13:56:31,592 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34615
-2022-08-26 13:56:31,592 - distributed.worker - INFO -           Worker name:                        foo
-2022-08-26 13:56:31,592 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46863
-2022-08-26 13:56:31,592 - distributed.worker - INFO - Waiting to connect to:      tcp://localhost:58731
-2022-08-26 13:56:31,592 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:31,592 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:56:31,592 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:56:31,592 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3zdxbgil
-2022-08-26 13:56:31,592 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:31,784 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34615', name: foo, status: init, memory: 0, processing: 0>
-2022-08-26 13:56:31,952 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34615
-2022-08-26 13:56:31,952 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:31,952 - distributed.worker - INFO -         Registered to:      tcp://localhost:58731
-2022-08-26 13:56:31,952 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:31,953 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:32,696 - distributed.scheduler - INFO - Receive client connection: Client-8fcf0ada-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:32,697 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:32,709 - distributed.scheduler - INFO - Remove client Client-8fcf0ada-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:32,709 - distributed.scheduler - INFO - Remove client Client-8fcf0ada-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:32,709 - distributed.scheduler - INFO - Close client connection: Client-8fcf0ada-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:32,710 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34615', name: foo, status: running, memory: 0, processing: 0>
-2022-08-26 13:56:32,710 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34615
-2022-08-26 13:56:32,710 - distributed.scheduler - INFO - Lost all workers
-
-Aborted!
-
-Aborted!
-PASSED
-distributed/cli/tests/test_dask_spec.py::test_file 2022-08-26 13:56:33,046 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:33,049 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:33,050 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:33,051 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33913
-2022-08-26 13:56:33,051 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45265
-2022-08-26 13:56:33,054 - distributed.scheduler - INFO - Receive client connection: Client-90e86c66-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:33,054 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:33,422 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43745
-2022-08-26 13:56:33,422 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43745
-2022-08-26 13:56:33,422 - distributed.worker - INFO -           Worker name:                        foo
-2022-08-26 13:56:33,422 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40405
-2022-08-26 13:56:33,422 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33913
-2022-08-26 13:56:33,422 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:33,422 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:56:33,422 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:56:33,422 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5mhh0hv8
-2022-08-26 13:56:33,422 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:33,614 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43745', name: foo, status: init, memory: 0, processing: 0>
-2022-08-26 13:56:33,615 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43745
-2022-08-26 13:56:33,615 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:33,615 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33913
-2022-08-26 13:56:33,615 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:33,616 - distributed.core - INFO - Starting established connection
-
-Aborted!
-2022-08-26 13:56:33,827 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43745', name: foo, status: running, memory: 0, processing: 0>
-2022-08-26 13:56:33,828 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43745
-2022-08-26 13:56:33,828 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:33,828 - distributed.scheduler - INFO - Remove client Client-90e86c66-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:33,829 - distributed.scheduler - INFO - Remove client Client-90e86c66-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:33,829 - distributed.scheduler - INFO - Close client connection: Client-90e86c66-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:33,829 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:33,829 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/cli/tests/test_dask_spec.py::test_errors PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args0-expect0] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args1-expect1] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args2-expect2] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args3-expect3] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args4-expect4] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args5-expect5] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args6-expect6] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args7-expect7] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args8-expect8] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args9-expect9] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args10-expect10] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args11-expect11] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args12-expect12] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args13-expect13] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args14-expect14] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args15-expect15] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args16-expect16] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports[args17-expect17] PASSED
-distributed/cli/tests/test_dask_worker.py::test_apportion_ports_bad PASSED
-distributed/cli/tests/test_dask_worker.py::test_nanny_worker_ports SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_nanny_worker_port_range SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_nanny_worker_port_range_too_many_workers_raises 2022-08-26 13:56:34,708 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:34,710 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:34,710 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43183
-2022-08-26 13:56:34,710 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38719
-2022-08-26 13:56:35,136 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:35,136 - distributed.scheduler - INFO - Scheduler closing all comms
-b'Traceback (most recent call last):\n'
-b'  File "/home/matthew/pkgsrc/install.20220728/bin/dask-worker", line 33, in <module>\n'
-b"    sys.exit(load_entry_point('distributed==2022.8.1', 'console_scripts', 'dask-worker')())\n"
-b'  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1130, in __call__\n'
-b'    return self.main(*args, **kwargs)\n'
-b'  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1055, in main\n'
-b'    rv = self.invoke(ctx)\n'
-b'  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1404, in invoke\n'
-b'    return ctx.invoke(self.callback, **ctx.params)\n'
-b'  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 760, in invoke\n'
-b'    return __callback(*args, **kwargs)\n'
-b'  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 423, in main\n'
-b'    port_kwargs = _apportion_ports(worker_port, nanny_port, n_workers, nanny)\n'
-b'  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 565, in _apportion_ports\n'
-b'    raise ValueError(\n'
-b'ValueError: Not enough ports in range --worker_port 9684:9685 --nanny_port 9686:9687 for 3 workers\n'
------- stdout: returncode 1, ['/home/matthew/pkgsrc/install.20220728/bin/dask-worker', 'tcp://127.0.0.1:43183', '--nworkers', '3', '--host', '127.0.0.1', '--worker-port', '9684:9685', '--nanny-port', '9686:9687', '--no-dashboard'] ------
-Exception ignored in atexit callback: <function _close_global_client at 0x564dd48c1480>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/client.py", line 5386, in _close_global_client
-    c = _get_global_client()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/client.py", line 122, in _get_global_client
-    L = sorted(list(_global_clients), reverse=True)
-KeyboardInterrupt: 
-
-PASSED
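
The ValueError captured above comes from dask-worker's port apportionment: with --nworkers 3, every worker (and every nanny) needs its own port, but the ranges 9684:9685 and 9686:9687 contain only two ports each. The following is an illustrative re-creation of that arithmetic, not the real _apportion_ports implementation; names and the exact error wording are placeholders.

    def ports_in_range(spec: str) -> int:
        """Count ports in an inclusive 'low:high' range, e.g. '9684:9685' -> 2."""
        low, high = (int(p) for p in spec.split(":"))
        return high - low + 1

    def check_port_ranges(worker_port: str, nanny_port: str, n_workers: int) -> None:
        """Raise if either range cannot provide one port per worker."""
        for flag, spec in (("--worker_port", worker_port), ("--nanny_port", nanny_port)):
            if ports_in_range(spec) < n_workers:
                raise ValueError(
                    f"Not enough ports in range {flag} {spec} for {n_workers} workers"
                )

    check_port_ranges("9684:9685", "9686:9687", 2)    # fine: two ports each
    # check_port_ranges("9684:9685", "9686:9687", 3)  # raises, matching the log above
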
-distributed/cli/tests/test_dask_worker.py::test_memory_limit SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_no_nanny 2022-08-26 13:56:35,267 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:35,269 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:35,269 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45627
-2022-08-26 13:56:35,269 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43803
-2022-08-26 13:56:35,272 - distributed.scheduler - INFO - Receive client connection: Client-923af4ad-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:35,273 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:35,643 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34561
-2022-08-26 13:56:35,643 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34561
-2022-08-26 13:56:35,643 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46487
-2022-08-26 13:56:35,643 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45627
-2022-08-26 13:56:35,643 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:35,643 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:56:35,643 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:56:35,643 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qr1h_qp9
-2022-08-26 13:56:35,643 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:35,833 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34561', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:35,833 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34561
-2022-08-26 13:56:35,834 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:35,834 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45627
-2022-08-26 13:56:35,834 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:35,834 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:35,882 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:35,882 - distributed.dask_worker - INFO - End worker
-2022-08-26 13:56:36,046 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34561', status: running, memory: 0, processing: 0>
-2022-08-26 13:56:36,046 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34561
-2022-08-26 13:56:36,046 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:36,047 - distributed.scheduler - INFO - Remove client Client-923af4ad-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:36,047 - distributed.scheduler - INFO - Remove client Client-923af4ad-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:36,047 - distributed.scheduler - INFO - Close client connection: Client-923af4ad-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:36,048 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:36,048 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/cli/tests/test_dask_worker.py::test_reconnect_deprecated 2022-08-26 13:56:36,180 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:36,182 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:36,182 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35135
-2022-08-26 13:56:36,182 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34053
-2022-08-26 13:56:36,185 - distributed.scheduler - INFO - Receive client connection: Client-92c636ce-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:36,185 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:37,754 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46287', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:37,754 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46287
-2022-08-26 13:56:37,754 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:37,786 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:37,786 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:37,787 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46287', status: running, memory: 0, processing: 0>
-2022-08-26 13:56:37,787 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46287
-2022-08-26 13:56:37,787 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:44,269 - tornado.application - ERROR - Exception in callback <bound method Client._heartbeat of <Client: 'tcp://127.0.0.1:35135' processes=1 threads=12, memory=62.82 GiB>>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 921, in _run
-    val = self.callback()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1374, in _heartbeat
-    self.scheduler_comm.send({"op": "heartbeat-client"})
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 156, in send
-    raise CommClosedError(f"Comm {self.comm!r} already closed.")
-distributed.comm.core.CommClosedError: Comm <TCP (closed) Client->Scheduler local=tcp://127.0.0.1:34686 remote=tcp://127.0.0.1:35135> already closed.
-b'2022-08-26 13:56:36,550 - distributed.dask_worker - ERROR - The `--reconnect` option has been removed. To improve cluster stability, workers now always shut down in the face of network disconnects. For details, or if this is an issue for you, see https://github.com/dask/distributed/issues/6350.\n'
-b'2022-08-26 13:56:36,973 - distributed.dask_worker - WARNING - The `--no-reconnect/--reconnect` flag is deprecated, and will be removed in a future release. Worker reconnection is now always disabled, so `--no-reconnect` is unnecessary. See https://github.com/dask/distributed/issues/6350 for details.\n'
------- stdout: returncode 0, ['/home/matthew/pkgsrc/install.20220728/bin/dask-worker', 'tcp://127.0.0.1:35135', '--no-reconnect'] ------
-2022-08-26 13:56:36,977 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:41775'
-2022-08-26 13:56:37,750 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46287
-2022-08-26 13:56:37,750 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46287
-2022-08-26 13:56:37,750 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33343
-2022-08-26 13:56:37,750 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35135
-2022-08-26 13:56:37,750 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:37,750 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:56:37,751 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:56:37,751 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-v0r7eib0
-2022-08-26 13:56:37,751 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:37,754 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35135
-2022-08-26 13:56:37,754 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:37,755 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:37,787 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46287
-2022-08-26 13:56:37,787 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2d4ba7dc-791a-40cc-a8fd-0dfdb07c23f4 Address tcp://127.0.0.1:46287 Status: Status.closing
-2022-08-26 13:56:37,788 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:37,787 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:59936 remote=tcp://127.0.0.1:35135>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 13:56:37,788 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41775'.
-2022-08-26 13:56:37,788 - distributed.nanny - INFO - Nanny asking worker to close
-Task exception was never retrieved
-future: <Task finished name='Task-11' coro=<Worker.close() done, defined at /home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/utils.py:797> exception=CommClosedError('in <TCP (closed) ConnectionPool.close_gracefully local=tcp://127.0.0.1:40126 remote=tcp://127.0.0.1:41775>: Stream is closed')>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/worker.py", line 1491, in close
-    await r.close_gracefully()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 1154, in send_recv_from_rpc
-    return await send_recv(comm=comm, op=key, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 919, in send_recv
-    response = await comm.read(deserializers=deserializers)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed) ConnectionPool.close_gracefully local=tcp://127.0.0.1:40126 remote=tcp://127.0.0.1:41775>: Stream is closed
-2022-08-26 13:56:44,189 - distributed.nanny - WARNING - Worker process still alive after 6.399999237060547 seconds, killing
-2022-08-26 13:56:44,193 - distributed.nanny - INFO - Worker process 520645 was killed by signal 9
-2022-08-26 13:56:44,194 - distributed.dask_worker - INFO - End worker
-
-PASSED
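
The tornado callback error in the reconnect test above is the client's periodic heartbeat firing after the scheduler comm has already been torn down: the batched send path refuses to queue messages on a closed comm and raises CommClosedError rather than silently dropping them. A rough sketch of that guard follows, using a stand-in exception class and a plain list buffer rather than distributed's own BatchedSend.

    class CommClosedError(RuntimeError):
        """Stand-in for distributed.comm.core.CommClosedError."""

    class BatchedSendSketch:
        """Queue messages for a comm; refuse to accept them once it is closed."""
        def __init__(self):
            self.buffer = []
            self.closed = False

        def send(self, msg):
            if self.closed:
                raise CommClosedError("Comm already closed.")
            self.buffer.append(msg)   # a background task would flush this later

        def close(self):
            self.closed = True

    bs = BatchedSendSketch()
    bs.send({"op": "heartbeat-client"})
    bs.close()
    # bs.send({"op": "heartbeat-client"})  # would raise CommClosedError, as in the log
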
-distributed/cli/tests/test_dask_worker.py::test_resources SKIPPED (n...)
-distributed/cli/tests/test_dask_worker.py::test_local_directory[--nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_local_directory[--no-nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_scheduler_file[--nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_scheduler_file[--no-nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_scheduler_address_env SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_nworkers_requires_nanny 2022-08-26 13:56:44,408 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:44,409 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:44,410 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42511
-2022-08-26 13:56:44,410 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44195
-2022-08-26 13:56:44,831 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:44,831 - distributed.scheduler - INFO - Scheduler closing all comms
-b'2022-08-26 13:56:44,772 - distributed.dask_worker - ERROR - Failed to launch worker.  You cannot use the --no-nanny argument when n_workers > 1.\n'
------- stdout: returncode 1, ['/home/matthew/pkgsrc/install.20220728/bin/dask-worker', 'tcp://127.0.0.1:42511', '--nworkers=2', '--no-nanny'] ------
-Exception ignored in atexit callback: <function close_clusters at 0x564ea812c1d0>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/deploy/spec.py", line 673, in close_clusters
-    for cluster in list(SpecCluster._instances):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/_weakrefset.py", line 64, in __iter__
-    with _IterationGuard(self):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/_weakrefset.py", line 17, in __init__
-    def __init__(self, weakcontainer):
-KeyboardInterrupt: 
-
-PASSED
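
test_nworkers_requires_nanny above pins down a CLI invariant: several workers per dask-worker invocation are only possible when a nanny process supervises them, so --nworkers=2 combined with --no-nanny is rejected before anything starts. A small illustrative guard for that rule, not the CLI's actual structure:

    def validate_worker_options(n_workers: int, nanny: bool) -> None:
        """Reject the option combination the dask-worker log above shows failing."""
        if n_workers > 1 and not nanny:
            raise ValueError(
                "Failed to launch worker. You cannot use the --no-nanny argument "
                "when n_workers > 1."
            )

    validate_worker_options(n_workers=1, nanny=False)    # allowed
    validate_worker_options(n_workers=2, nanny=True)     # allowed
    # validate_worker_options(n_workers=2, nanny=False)  # raises, matching the log
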
-distributed/cli/tests/test_dask_worker.py::test_nworkers_negative SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_nworkers_auto SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_nworkers_expands_name SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_worker_cli_nprocs_renamed_to_nworkers SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_worker_cli_nworkers_with_nprocs_is_an_error 2022-08-26 13:56:44,966 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:44,968 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:44,968 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33443
-2022-08-26 13:56:44,968 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40613
-2022-08-26 13:56:45,392 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:45,392 - distributed.scheduler - INFO - Scheduler closing all comms
-b'2022-08-26 13:56:45,332 - distributed.dask_worker - ERROR - Both --nprocs and --nworkers were specified. Use --nworkers only.\n'
------- stdout: returncode 1, ['/home/matthew/pkgsrc/install.20220728/bin/dask-worker', 'tcp://127.0.0.1:33443', '--nprocs=2', '--nworkers=2'] ------
-Exception ignored in atexit callback: <function _close_global_client at 0x5587d79e2100>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/client.py", line 5386, in _close_global_client
-    c = _get_global_client()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/client.py", line 122, in _get_global_client
-    L = sorted(list(_global_clients), reverse=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/weakref.py", line 222, in keys
-    with _IterationGuard(self):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/_weakrefset.py", line 27, in __exit__
-    def __exit__(self, e, t, b):
-KeyboardInterrupt: 
-
-PASSED
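
The --nprocs/--nworkers test above documents the handling of a renamed CLI option: the old spelling still parses, but supplying both at once is an error and the old name alone only warns. Since the tracebacks show the CLI is built on click, here is a hedged click sketch of that style of rename handling; it is not the dask-worker command itself, and option help text and messages are illustrative.

    import click

    @click.command()
    @click.option("--nworkers", type=int, default=None)
    @click.option("--nprocs", type=int, default=None, help="Deprecated alias of --nworkers.")
    def main(nworkers, nprocs):
        if nworkers is not None and nprocs is not None:
            raise click.UsageError(
                "Both --nprocs and --nworkers were specified. Use --nworkers only."
            )
        if nprocs is not None:
            click.echo("--nprocs is deprecated; please switch to --nworkers.", err=True)
            nworkers = nprocs
        click.echo(f"starting {nworkers or 1} worker(s)")

    if __name__ == "__main__":
        main()
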
-distributed/cli/tests/test_dask_worker.py::test_contact_listen_address[tcp://0.0.0.0:---nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_contact_listen_address[tcp://0.0.0.0:---no-nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_contact_listen_address[tcp://127.0.0.2:---nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_contact_listen_address[tcp://127.0.0.2:---no-nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_listen_address_ipv6[tcp://:---nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_listen_address_ipv6[tcp://:---no-nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_listen_address_ipv6[tcp://[::1]:---nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_listen_address_ipv6[tcp://[::1]:---no-nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_respect_host_listen_address[127.0.0.2---nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_respect_host_listen_address[127.0.0.2---no-nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_respect_host_listen_address[0.0.0.0---nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_respect_host_listen_address[0.0.0.0---no-nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_dashboard_non_standard_ports SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_version_option PASSED
-distributed/cli/tests/test_dask_worker.py::test_worker_timeout[True] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_worker_timeout[False] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_bokeh_deprecation PASSED
-distributed/cli/tests/test_dask_worker.py::test_integer_names SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_worker_class[--nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_worker_class[--no-nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_preload_config SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_timeout[--nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_timeout[--no-nanny] SKIPPED
-distributed/cli/tests/test_dask_worker.py::test_signal_handling[Signals.SIGINT---nanny] 2022-08-26 13:56:45,546 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:45,547 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:45,548 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46627
-2022-08-26 13:56:45,548 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44319
-2022-08-26 13:56:45,551 - distributed.scheduler - INFO - Receive client connection: Client-985b5170-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:45,551 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:46,713 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42089', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:46,713 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42089
-2022-08-26 13:56:46,713 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:47,026 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42089', status: closing, memory: 0, processing: 0>
-2022-08-26 13:56:47,026 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42089
-2022-08-26 13:56:47,027 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:47,027 - distributed.scheduler - INFO - Remove client Client-985b5170-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:47,027 - distributed.scheduler - INFO - Remove client Client-985b5170-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:47,027 - distributed.scheduler - INFO - Close client connection: Client-985b5170-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:47,028 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:47,028 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/cli/tests/test_dask_worker.py::test_signal_handling[Signals.SIGINT---no-nanny] 2022-08-26 13:56:47,161 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:47,162 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:47,162 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38997
-2022-08-26 13:56:47,162 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38459
-2022-08-26 13:56:47,166 - distributed.scheduler - INFO - Receive client connection: Client-9951b959-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:47,166 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:47,915 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43775', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:47,915 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43775
-2022-08-26 13:56:47,915 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:48,161 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43775', status: running, memory: 0, processing: 0>
-2022-08-26 13:56:48,161 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43775
-2022-08-26 13:56:48,161 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:48,162 - distributed.scheduler - INFO - Remove client Client-9951b959-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:48,162 - distributed.scheduler - INFO - Remove client Client-9951b959-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:48,163 - distributed.scheduler - INFO - Close client connection: Client-9951b959-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:48,164 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:48,164 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/cli/tests/test_dask_worker.py::test_signal_handling[Signals.SIGTERM---nanny] 2022-08-26 13:56:48,296 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:48,297 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:48,298 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36339
-2022-08-26 13:56:48,298 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45419
-2022-08-26 13:56:48,301 - distributed.scheduler - INFO - Receive client connection: Client-99fef095-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:48,301 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:49,445 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39053', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:49,446 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39053
-2022-08-26 13:56:49,446 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:49,783 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39053', status: closing, memory: 0, processing: 0>
-2022-08-26 13:56:49,783 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39053
-2022-08-26 13:56:49,783 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:49,784 - distributed.scheduler - INFO - Remove client Client-99fef095-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:49,784 - distributed.scheduler - INFO - Remove client Client-99fef095-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:49,784 - distributed.scheduler - INFO - Close client connection: Client-99fef095-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:49,784 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:49,785 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/cli/tests/test_dask_worker.py::test_signal_handling[Signals.SIGTERM---no-nanny] 2022-08-26 13:56:49,922 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:49,924 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:49,924 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33453
-2022-08-26 13:56:49,924 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44715
-2022-08-26 13:56:49,927 - distributed.scheduler - INFO - Receive client connection: Client-9af71f1f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:49,928 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:50,674 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37425', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:50,675 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37425
-2022-08-26 13:56:50,675 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:50,922 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37425', status: running, memory: 0, processing: 0>
-2022-08-26 13:56:50,922 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37425
-2022-08-26 13:56:50,922 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:50,923 - distributed.scheduler - INFO - Remove client Client-9af71f1f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:50,923 - distributed.scheduler - INFO - Remove client Client-9af71f1f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:50,923 - distributed.scheduler - INFO - Close client connection: Client-9af71f1f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:50,923 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:50,924 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/cli/tests/test_dask_worker.py::test_error_during_startup[--nanny] 2022-08-26 13:56:51,420 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:51,791 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:51,831 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:51,833 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:51,833 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:51,833 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:38045
-2022-08-26 13:56:51,833 - distributed.scheduler - INFO -   dashboard at:                    :33519
-2022-08-26 13:56:52,224 - distributed.scheduler - INFO - Receive client connection: Client-9ba3ab12-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:52,225 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:52,593 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36137'
-2022-08-26 13:56:52,965 - distributed.worker - INFO - Stopping worker
-2022-08-26 13:56:52,965 - distributed.worker - INFO - Closed worker has not yet started: Status.init
-2022-08-26 13:56:52,965 - distributed.nanny - ERROR - Failed to start worker
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 481, in start
-    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
-    return await fut
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/worker.py", line 1315, in start_unsafe
-    await self.listen(start_address, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 659, in listen
-    listener = await listen(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/core.py", line 212, in _
-    await self.start()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/tcp.py", line 563, in start
-    sockets = netutil.bind_sockets(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/netutil.py", line 162, in bind_sockets
-    sock.bind(sockaddr)
-OSError: [Errno 98] Address already in use
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/nanny.py", line 892, in run
-    await worker
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 489, in start
-    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
-RuntimeError: Worker failed to start.
-2022-08-26 13:56:52,999 - distributed.nanny - ERROR - Failed to start process
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 481, in start
-    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
-    return await fut
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/worker.py", line 1315, in start_unsafe
-    await self.listen(start_address, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 659, in listen
-    listener = await listen(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/core.py", line 212, in _
-    await self.start()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/tcp.py", line 563, in start
-    sockets = netutil.bind_sockets(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/netutil.py", line 162, in bind_sockets
-    sock.bind(sockaddr)
-OSError: [Errno 98] Address already in use
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/nanny.py", line 438, in instantiate
-    result = await self.process.start()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/nanny.py", line 695, in start
-    msg = await self._wait_until_connected(uid)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/nanny.py", line 823, in _wait_until_connected
-    raise msg["exception"]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/nanny.py", line 892, in run
-    await worker
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 489, in start
-    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
-RuntimeError: Worker failed to start.
-2022-08-26 13:56:53,000 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36137'.
-2022-08-26 13:56:53,000 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 13:56:53,001 - distributed.nanny - INFO - Worker process 521291 was killed by signal 15
-2022-08-26 13:56:53,001 - distributed.dask_worker - INFO - End worker
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 481, in start
-    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
-    return await fut
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/worker.py", line 1315, in start_unsafe
-    await self.listen(start_address, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 659, in listen
-    listener = await listen(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/core.py", line 212, in _
-    await self.start()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/tcp.py", line 563, in start
-    sockets = netutil.bind_sockets(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/netutil.py", line 162, in bind_sockets
-    sock.bind(sockaddr)
-OSError: [Errno 98] Address already in use
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 481, in start
-    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
-    return await fut
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/nanny.py", line 364, in start_unsafe
-    response = await self.instantiate()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/nanny.py", line 438, in instantiate
-    result = await self.process.start()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/nanny.py", line 695, in start
-    msg = await self._wait_until_connected(uid)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/nanny.py", line 823, in _wait_until_connected
-    raise msg["exception"]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/nanny.py", line 892, in run
-    await worker
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 489, in start
-    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
-RuntimeError: Worker failed to start.
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/bin/dask-worker", line 33, in <module>
-    sys.exit(load_entry_point('distributed==2022.8.1', 'console_scripts', 'dask-worker')())
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
-    return self.main(*args, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1055, in main
-    rv = self.invoke(ctx)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
-    return ctx.invoke(self.callback, **ctx.params)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 760, in invoke
-    return __callback(*args, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 500, in main
-    asyncio.run(run())
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/runners.py", line 44, in run
-    return loop.run_until_complete(main)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
-    return future.result()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 497, in run
-    [task.result() for task in done]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 497, in <listcomp>
-    [task.result() for task in done]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 472, in wait_for_nannies_to_finish
-    await asyncio.gather(*nannies)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 650, in _wrap_awaitable
-    return (yield from awaitable.__await__())
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 489, in start
-    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
-RuntimeError: Nanny failed to start.
-2022-08-26 13:56:53,092 - distributed.scheduler - INFO - Remove client Client-9ba3ab12-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:53,093 - distributed.scheduler - INFO - Remove client Client-9ba3ab12-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:53,093 - distributed.scheduler - INFO - Close client connection: Client-9ba3ab12-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:53,093 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:53,093 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:53,094 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:53,094 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:38045'
-2022-08-26 13:56:53,094 - distributed.scheduler - INFO - End scheduler
-PASSED
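
test_error_during_startup forces the worker to bind an address the scheduler already holds; the notable part of the traceback is the chaining, where the low-level OSError [Errno 98] from socket bind() is re-raised as RuntimeError("... failed to start.") via "raise ... from exc", so both causes stay visible. A compact, generic reproduction of that pattern is below; the class name and messages are placeholders, not distributed's internals.

    import socket

    class StartupError(RuntimeError):
        """Stand-in for the '<name> failed to start.' RuntimeError in the log."""

    def start_listener(port: int) -> socket.socket:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("127.0.0.1", port))
        except OSError as exc:            # e.g. [Errno 98] Address already in use
            s.close()
            raise StartupError("Worker failed to start.") from exc
        s.listen()
        return s

    first = start_listener(0)             # bind to any free port
    port = first.getsockname()[1]
    try:
        start_listener(port)              # same port -> OSError -> StartupError
    except StartupError as err:
        print(err, "| caused by:", err.__cause__)
    finally:
        first.close()
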
-distributed/cli/tests/test_dask_worker.py::test_error_during_startup[--no-nanny] 2022-08-26 13:56:53,682 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:54,056 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:54,097 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:54,098 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:54,099 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:54,099 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:57645
-2022-08-26 13:56:54,099 - distributed.scheduler - INFO -   dashboard at:                    :36117
-2022-08-26 13:56:55,649 - distributed.scheduler - INFO - Receive client connection: Client-9cfc98cc-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:55,650 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:56,019 - distributed.worker - INFO - Stopping worker
-2022-08-26 13:56:56,019 - distributed.worker - INFO - Closed worker has not yet started: Status.init
-2022-08-26 13:56:56,020 - distributed.dask_worker - INFO - End worker
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 481, in start
-    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
-    return await fut
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/worker.py", line 1315, in start_unsafe
-    await self.listen(start_address, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 659, in listen
-    listener = await listen(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/core.py", line 212, in _
-    await self.start()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/comm/tcp.py", line 563, in start
-    sockets = netutil.bind_sockets(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/netutil.py", line 162, in bind_sockets
-    sock.bind(sockaddr)
-OSError: [Errno 98] Address already in use
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/bin/dask-worker", line 33, in <module>
-    sys.exit(load_entry_point('distributed==2022.8.1', 'console_scripts', 'dask-worker')())
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
-    return self.main(*args, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1055, in main
-    rv = self.invoke(ctx)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
-    return ctx.invoke(self.callback, **ctx.params)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/click/core.py", line 760, in invoke
-    return __callback(*args, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 500, in main
-    asyncio.run(run())
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/runners.py", line 44, in run
-    return loop.run_until_complete(main)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
-    return future.result()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 497, in run
-    [task.result() for task in done]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 497, in <listcomp>
-    [task.result() for task in done]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/cli/dask_worker.py", line 472, in wait_for_nannies_to_finish
-    await asyncio.gather(*nannies)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 650, in _wrap_awaitable
-    return (yield from awaitable.__await__())
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/distributed/core.py", line 489, in start
-    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
-RuntimeError: Worker failed to start.
-2022-08-26 13:56:56,116 - distributed.scheduler - INFO - Remove client Client-9cfc98cc-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:56,117 - distributed.scheduler - INFO - Remove client Client-9cfc98cc-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:56,117 - distributed.scheduler - INFO - Close client connection: Client-9cfc98cc-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:56,117 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:56,117 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:56,117 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:56,118 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:57645'
-2022-08-26 13:56:56,118 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_tls_cli.py::test_basic 2022-08-26 13:56:56,731 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:56,734 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:56,736 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:56,738 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:56,739 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:56,739 - distributed.scheduler - INFO -   Scheduler at:  tls://192.168.1.159:8786
-2022-08-26 13:56:56,739 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:56,743 - distributed.nanny - INFO -         Start Nanny at: 'tls://127.0.0.1:37063'
-2022-08-26 13:56:57,036 - distributed.scheduler - INFO - Receive client connection: Client-9eca56d0-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:57,122 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:33945
-2022-08-26 13:56:57,122 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:33945
-2022-08-26 13:56:57,122 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37547
-2022-08-26 13:56:57,122 - distributed.worker - INFO - Waiting to connect to:       tls://127.0.0.1:8786
-2022-08-26 13:56:57,122 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:57,122 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:56:57,122 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:56:57,122 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_55lh_54
-2022-08-26 13:56:57,122 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:57,231 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:57,404 - distributed.scheduler - INFO - Register worker <WorkerState 'tls://127.0.0.1:33945', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:57,404 - distributed.scheduler - INFO - Starting worker compute stream, tls://127.0.0.1:33945
-2022-08-26 13:56:57,404 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:57,404 - distributed.worker - INFO -         Registered to:       tls://127.0.0.1:8786
-2022-08-26 13:56:57,405 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:57,405 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:57,436 - distributed.scheduler - INFO - Remove client Client-9eca56d0-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:57,436 - distributed.scheduler - INFO - Remove client Client-9eca56d0-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:57,436 - distributed.scheduler - INFO - Close client connection: Client-9eca56d0-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:57,437 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:57,437 - distributed.nanny - INFO - Closing Nanny at 'tls://127.0.0.1:37063'.
-2022-08-26 13:56:57,437 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 13:56:57,438 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:33945
-2022-08-26 13:56:57,438 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-eea40942-9277-4a2a-a3de-00b5613fa107 Address tls://127.0.0.1:33945 Status: Status.closing
-2022-08-26 13:56:57,439 - distributed.scheduler - INFO - Remove worker <WorkerState 'tls://127.0.0.1:33945', status: closing, memory: 0, processing: 0>
-2022-08-26 13:56:57,439 - distributed.core - INFO - Removing comms to tls://127.0.0.1:33945
-2022-08-26 13:56:57,439 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:57,556 - distributed.dask_worker - INFO - End worker
-2022-08-26 13:56:57,650 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:57,651 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:57,651 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:57,651 - distributed.scheduler - INFO - Stopped scheduler at 'tls://192.168.1.159:8786'
-2022-08-26 13:56:57,651 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_tls_cli.py::test_nanny 2022-08-26 13:56:58,199 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:58,203 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:58,205 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:58,207 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:58,208 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:58,208 - distributed.scheduler - INFO -   Scheduler at: tls://192.168.1.159:44601
-2022-08-26 13:56:58,208 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:58,213 - distributed.nanny - INFO -         Start Nanny at: 'tls://127.0.0.1:44179'
-2022-08-26 13:56:58,270 - distributed.scheduler - INFO - Receive client connection: Client-9fabfe09-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:58,459 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:58,602 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:40729
-2022-08-26 13:56:58,602 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:40729
-2022-08-26 13:56:58,602 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43313
-2022-08-26 13:56:58,602 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:44601
-2022-08-26 13:56:58,602 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:58,602 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:56:58,602 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:56:58,602 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nr6jw8c3
-2022-08-26 13:56:58,602 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:58,775 - distributed.scheduler - INFO - Register worker <WorkerState 'tls://127.0.0.1:40729', status: init, memory: 0, processing: 0>
-2022-08-26 13:56:58,776 - distributed.scheduler - INFO - Starting worker compute stream, tls://127.0.0.1:40729
-2022-08-26 13:56:58,776 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:58,776 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:44601
-2022-08-26 13:56:58,776 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:56:58,777 - distributed.core - INFO - Starting established connection
-2022-08-26 13:56:58,867 - distributed.scheduler - INFO - Remove client Client-9fabfe09-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:58,867 - distributed.scheduler - INFO - Remove client Client-9fabfe09-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:58,867 - distributed.scheduler - INFO - Close client connection: Client-9fabfe09-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:56:58,867 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:58,868 - distributed.nanny - INFO - Closing Nanny at 'tls://127.0.0.1:44179'.
-2022-08-26 13:56:58,868 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 13:56:58,868 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:40729
-2022-08-26 13:56:58,869 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-12a22e30-5e8f-416d-8753-9bc523e5193b Address tls://127.0.0.1:40729 Status: Status.closing
-2022-08-26 13:56:58,869 - distributed.scheduler - INFO - Remove worker <WorkerState 'tls://127.0.0.1:40729', status: closing, memory: 0, processing: 0>
-2022-08-26 13:56:58,869 - distributed.core - INFO - Removing comms to tls://127.0.0.1:40729
-2022-08-26 13:56:58,869 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:56:58,986 - distributed.dask_worker - INFO - End worker
-2022-08-26 13:56:59,081 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:56:59,081 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:56:59,082 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:56:59,082 - distributed.scheduler - INFO - Stopped scheduler at 'tls://192.168.1.159:44601'
-2022-08-26 13:56:59,082 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_tls_cli.py::test_separate_key_cert 2022-08-26 13:56:59,626 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:59,630 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:56:59,632 - distributed.scheduler - INFO - State start
-2022-08-26 13:56:59,634 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:56:59,634 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:56:59,635 - distributed.scheduler - INFO -   Scheduler at: tls://192.168.1.159:60273
-2022-08-26 13:56:59,635 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:56:59,639 - distributed.nanny - INFO -         Start Nanny at: 'tls://127.0.0.1:40707'
-2022-08-26 13:57:00,019 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:44449
-2022-08-26 13:57:00,019 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:44449
-2022-08-26 13:57:00,019 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42199
-2022-08-26 13:57:00,019 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:60273
-2022-08-26 13:57:00,019 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:00,019 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:00,019 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:00,020 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qvdavj06
-2022-08-26 13:57:00,020 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:00,204 - distributed.scheduler - INFO - Receive client connection: Client-a086a037-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:00,374 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:00,375 - distributed.scheduler - INFO - Register worker <WorkerState 'tls://127.0.0.1:44449', status: init, memory: 0, processing: 0>
-2022-08-26 13:57:00,375 - distributed.scheduler - INFO - Starting worker compute stream, tls://127.0.0.1:44449
-2022-08-26 13:57:00,375 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:00,375 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:60273
-2022-08-26 13:57:00,376 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:00,376 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:00,387 - distributed.scheduler - INFO - Remove client Client-a086a037-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:00,387 - distributed.scheduler - INFO - Remove client Client-a086a037-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:00,387 - distributed.scheduler - INFO - Close client connection: Client-a086a037-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:00,387 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:57:00,388 - distributed.nanny - INFO - Closing Nanny at 'tls://127.0.0.1:40707'.
-2022-08-26 13:57:00,388 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 13:57:00,388 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:44449
-2022-08-26 13:57:00,389 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4a0751b5-2303-405a-845e-debc15cca40c Address tls://127.0.0.1:44449 Status: Status.closing
-2022-08-26 13:57:00,389 - distributed.scheduler - INFO - Remove worker <WorkerState 'tls://127.0.0.1:44449', status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:00,390 - distributed.core - INFO - Removing comms to tls://127.0.0.1:44449
-2022-08-26 13:57:00,390 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:00,516 - distributed.dask_worker - INFO - End worker
-2022-08-26 13:57:00,601 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:57:00,601 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:00,602 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:57:00,602 - distributed.scheduler - INFO - Stopped scheduler at 'tls://192.168.1.159:60273'
-2022-08-26 13:57:00,602 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/cli/tests/test_tls_cli.py::test_use_config_file 2022-08-26 13:57:01,138 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:57:01,142 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:57:01,144 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:01,146 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 13:57:01,146 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:01,146 - distributed.scheduler - INFO -   Scheduler at: tls://192.168.1.159:49333
-2022-08-26 13:57:01,147 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:57:01,157 - distributed.nanny - INFO -         Start Nanny at: 'tls://127.0.0.1:38299'
-2022-08-26 13:57:01,330 - distributed.scheduler - INFO - Receive client connection: Client-a16e6887-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:01,526 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:01,536 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:43965
-2022-08-26 13:57:01,536 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:43965
-2022-08-26 13:57:01,536 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44497
-2022-08-26 13:57:01,536 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:49333
-2022-08-26 13:57:01,536 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:01,536 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:01,536 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:01,536 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ga3spc6x
-2022-08-26 13:57:01,536 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:01,709 - distributed.scheduler - INFO - Register worker <WorkerState 'tls://127.0.0.1:43965', status: init, memory: 0, processing: 0>
-2022-08-26 13:57:01,710 - distributed.scheduler - INFO - Starting worker compute stream, tls://127.0.0.1:43965
-2022-08-26 13:57:01,710 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:01,710 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:49333
-2022-08-26 13:57:01,710 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:01,711 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:01,731 - distributed.scheduler - INFO - Remove client Client-a16e6887-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:01,731 - distributed.scheduler - INFO - Remove client Client-a16e6887-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:01,732 - distributed.scheduler - INFO - Close client connection: Client-a16e6887-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:01,732 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:57:01,732 - distributed.nanny - INFO - Closing Nanny at 'tls://127.0.0.1:38299'.
-2022-08-26 13:57:01,732 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 13:57:01,733 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:43965
-2022-08-26 13:57:01,734 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-97f38a88-1d86-456d-a350-a655ba979b91 Address tls://127.0.0.1:43965 Status: Status.closing
-2022-08-26 13:57:01,734 - distributed.scheduler - INFO - Remove worker <WorkerState 'tls://127.0.0.1:43965', status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:01,734 - distributed.core - INFO - Removing comms to tls://127.0.0.1:43965
-2022-08-26 13:57:01,734 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:01,850 - distributed.dask_worker - INFO - End worker
-2022-08-26 13:57:01,946 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 13:57:01,946 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:01,946 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 13:57:01,947 - distributed.scheduler - INFO - Stopped scheduler at 'tls://192.168.1.159:49333'
-2022-08-26 13:57:01,947 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/comm/tests/test_comms.py::test_parse_host_port[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_parse_host_port[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_unparse_host_port[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_unparse_host_port[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_get_address_host[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_get_address_host[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_resolve_address[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_resolve_address[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_get_local_address_for[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_get_local_address_for[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_listener_does_not_call_handler_on_handshake_error[tornado] 2022-08-26 13:57:02,137 - distributed.comm.tcp - INFO - Connection from tcp://127.0.0.1:54772 closed before handshake completed
-PASSED
-distributed/comm/tests/test_comms.py::test_tcp_listener_does_not_call_handler_on_handshake_error[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_specific[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_specific[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tls_specific[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tls_specific[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_comm_failure_threading[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_comm_failure_threading[asyncio] SKIPPED
-distributed/comm/tests/test_comms.py::test_inproc_specific_same_thread PASSED
-distributed/comm/tests/test_comms.py::test_inproc_specific_different_threads PASSED
-distributed/comm/tests/test_comms.py::test_inproc_continues_listening_after_handshake_error PASSED
-distributed/comm/tests/test_comms.py::test_inproc_handshakes_concurrently PASSED
-distributed/comm/tests/test_comms.py::test_ucx_client_server SKIPPED
-distributed/comm/tests/test_comms.py::test_default_client_server_ipv4[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_default_client_server_ipv4[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_default_client_server_ipv6[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_default_client_server_ipv6[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_client_server_ipv4[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_client_server_ipv4[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_client_server_ipv6[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_client_server_ipv6[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tls_client_server_ipv4[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tls_client_server_ipv4[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tls_client_server_ipv6[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tls_client_server_ipv6[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_inproc_client_server PASSED
-distributed/comm/tests/test_comms.py::test_tls_reject_certificate[tornado] 2022-08-26 13:57:07,011 - distributed.comm.tcp - WARNING - Listener on 'tls://0.0.0.0:39413': TLS handshake failed with remote 'tls://192.168.1.159:35236': [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:997)
-2022-08-26 13:57:07,017 - distributed.comm.tcp - WARNING - Listener on 'tls://0.0.0.0:38155': TLS handshake failed with remote 'tls://192.168.1.159:37224': [SSL: TLSV1_ALERT_UNKNOWN_CA] tlsv1 alert unknown ca (_ssl.c:997)
-PASSED
-distributed/comm/tests/test_comms.py::test_tls_reject_certificate[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_comm_closed_implicit[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_comm_closed_implicit[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tls_comm_closed_implicit[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tls_comm_closed_implicit[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_inproc_comm_closed_implicit PASSED
-distributed/comm/tests/test_comms.py::test_tcp_comm_closed_explicit[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_comm_closed_explicit[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tls_comm_closed_explicit[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tls_comm_closed_explicit[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_inproc_comm_closed_explicit PASSED
-distributed/comm/tests/test_comms.py::test_inproc_comm_closed_explicit_2 PASSED
-distributed/comm/tests/test_comms.py::test_comm_closed_on_write_error[tornado-BufferError] PASSED
-distributed/comm/tests/test_comms.py::test_comm_closed_on_write_error[tornado-CustomBase] PASSED
-distributed/comm/tests/test_comms.py::test_comm_closed_on_write_error[asyncio-BufferError] SKIPPED
-distributed/comm/tests/test_comms.py::test_comm_closed_on_write_error[asyncio-CustomBase] SKIPPED
-distributed/comm/tests/test_comms.py::test_comm_closed_on_read_error[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_comm_closed_on_read_error[asyncio] SKIPPED
-distributed/comm/tests/test_comms.py::test_retry_connect[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_retry_connect[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_handshake_slow_comm[tornado] 2022-08-26 13:57:10,764 - distributed.comm.tcp - WARNING - Closing dangling stream in <TCP  local=tcp://127.0.0.1:1234 remote=tcp://127.0.0.1:46792>
-2022-08-26 13:57:11,366 - distributed.comm.tcp - INFO - Connection from tcp://127.0.0.1:46794 closed before handshake completed
-PASSED
-distributed/comm/tests/test_comms.py::test_handshake_slow_comm[asyncio] 2022-08-26 13:57:12,873 - distributed.comm.asyncio_tcp - WARNING - Closing dangling comm `<TCP  local=tcp://127.0.0.1:1234 remote=tcp://127.0.0.1:46796>`
-PASSED
-distributed/comm/tests/test_comms.py::test_tcp_connect_timeout[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_connect_timeout[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_inproc_connect_timeout PASSED
-distributed/comm/tests/test_comms.py::test_tcp_many_listeners[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_many_listeners[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_inproc_many_listeners PASSED
-distributed/comm/tests/test_comms.py::test_tcp_deserialize[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_deserialize[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_inproc_deserialize PASSED
-distributed/comm/tests/test_comms.py::test_inproc_deserialize_roundtrip 2022-08-26 13:57:14,131 - distributed.comm.inproc - WARNING - Closing dangling queue in <InProc  local=inproc://192.168.1.159/518557/224 remote=inproc://192.168.1.159/518557/223>
-2022-08-26 13:57:14,131 - distributed.comm.inproc - WARNING - Closing dangling queue in <InProc  local=inproc://192.168.1.159/518557/223 remote=inproc://192.168.1.159/518557/224>
-2022-08-26 13:57:14,131 - distributed.comm.inproc - WARNING - Closing dangling queue in <InProc  local=inproc://192.168.1.159/518557/226 remote=inproc://192.168.1.159/518557/225>
-2022-08-26 13:57:14,131 - distributed.comm.inproc - WARNING - Closing dangling queue in <InProc  local=inproc://192.168.1.159/518557/225 remote=inproc://192.168.1.159/518557/226>
-PASSED
-distributed/comm/tests/test_comms.py::test_tcp_deserialize_roundtrip[tornado] 2022-08-26 13:57:14,157 - distributed.comm.tcp - WARNING - Closing dangling stream in <TCP  local=tcp://192.168.1.159:39424 remote=tcp://192.168.1.159:32863>
-2022-08-26 13:57:14,157 - distributed.comm.tcp - WARNING - Closing dangling stream in <TCP  local=tcp://192.168.1.159:32863 remote=tcp://192.168.1.159:39424>
-2022-08-26 13:57:14,168 - distributed.comm.tcp - WARNING - Closing dangling stream in <TCP  local=tcp://192.168.1.159:33648 remote=tcp://192.168.1.159:41851>
-2022-08-26 13:57:14,168 - distributed.comm.tcp - WARNING - Closing dangling stream in <TCP  local=tcp://192.168.1.159:41851 remote=tcp://192.168.1.159:33648>
-PASSED
-distributed/comm/tests/test_comms.py::test_tcp_deserialize_roundtrip[asyncio] 2022-08-26 13:57:14,195 - distributed.comm.asyncio_tcp - WARNING - Closing dangling comm `<TCP  local=tcp://192.168.1.159:44834 remote=tcp://192.168.1.159:52553>`
-2022-08-26 13:57:14,195 - distributed.comm.asyncio_tcp - WARNING - Closing dangling comm `<TCP  local=tcp://192.168.1.159:52553 remote=tcp://192.168.1.159:44834>`
-2022-08-26 13:57:14,206 - distributed.comm.asyncio_tcp - WARNING - Closing dangling comm `<TCP  local=tcp://192.168.1.159:58710 remote=tcp://192.168.1.159:45979>`
-2022-08-26 13:57:14,206 - distributed.comm.asyncio_tcp - WARNING - Closing dangling comm `<TCP  local=tcp://192.168.1.159:45979 remote=tcp://192.168.1.159:58710>`
-PASSED
-distributed/comm/tests/test_comms.py::test_tcp_deserialize_eoferror[tornado] 2022-08-26 13:57:14,213 - distributed.protocol.pickle - INFO - Failed to deserialize <memory at 0x56403ddc36d0>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/pickle.py", line 73, in loads
-    return pickle.loads(x)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tests/test_comms.py", line 1258, in _raise_eoferror
-    raise EOFError
-EOFError
-2022-08-26 13:57:14,213 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 426, in deserialize
-    return loads(header, frames)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 96, in pickle_loads
-    return pickle.loads(x, buffers=buffers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/pickle.py", line 73, in loads
-    return pickle.loads(x)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tests/test_comms.py", line 1258, in _raise_eoferror
-    raise EOFError
-EOFError
-2022-08-26 13:57:14,214 - distributed.comm.utils - ERROR - truncated data stream (193 bytes): [<memory at 0x56403f47d3e0>, <memory at 0x56403ea46fd0>, <memory at 0x56403ea47090>]
-PASSED
-distributed/comm/tests/test_comms.py::test_tcp_deserialize_eoferror[asyncio] 2022-08-26 13:57:14,219 - distributed.protocol.pickle - INFO - Failed to deserialize <memory at 0x56403ceb99e0>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/pickle.py", line 73, in loads
-    return pickle.loads(x)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tests/test_comms.py", line 1258, in _raise_eoferror
-    raise EOFError
-EOFError
-2022-08-26 13:57:14,219 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 426, in deserialize
-    return loads(header, frames)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 96, in pickle_loads
-    return pickle.loads(x, buffers=buffers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/pickle.py", line 73, in loads
-    return pickle.loads(x)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tests/test_comms.py", line 1258, in _raise_eoferror
-    raise EOFError
-EOFError
-2022-08-26 13:57:14,219 - distributed.comm.utils - ERROR - truncated data stream (193 bytes): [<memory at 0x56403d6c0150>, <memory at 0x56403ceb9aa0>, <memory at 0x56403d6bfe90>]
-PASSED
-distributed/comm/tests/test_comms.py::test_tcp_repr[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_repr[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tls_repr[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tls_repr[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_inproc_repr PASSED
-distributed/comm/tests/test_comms.py::test_tcp_adresses[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tcp_adresses[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_tls_adresses[tornado] PASSED
-distributed/comm/tests/test_comms.py::test_tls_adresses[asyncio] PASSED
-distributed/comm/tests/test_comms.py::test_inproc_adresses PASSED
-distributed/comm/tests/test_comms.py::test_register_backend_entrypoint PASSED
-distributed/comm/tests/test_ws.py::test_registered PASSED
-distributed/comm/tests/test_ws.py::test_listen_connect PASSED
-distributed/comm/tests/test_ws.py::test_listen_connect_wss PASSED
-distributed/comm/tests/test_ws.py::test_expect_ssl_context PASSED
-distributed/comm/tests/test_ws.py::test_expect_scheduler_ssl_when_sharing_server 2022-08-26 13:57:15,114 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:15,116 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:15,116 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:15,117 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_roundtrip 2022-08-26 13:57:15,126 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:15,127 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:15,128 - distributed.scheduler - INFO -   Scheduler at:  ws://192.168.1.159:39585
-2022-08-26 13:57:15,128 - distributed.scheduler - INFO -   dashboard at:                    :41581
-2022-08-26 13:57:15,133 - distributed.worker - INFO -       Start worker at:   ws://192.168.1.159:35721
-2022-08-26 13:57:15,133 - distributed.worker - INFO -          Listening to:   ws://192.168.1.159:35721
-2022-08-26 13:57:15,133 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:57:15,134 - distributed.worker - INFO -          dashboard at:        192.168.1.159:38821
-2022-08-26 13:57:15,134 - distributed.worker - INFO - Waiting to connect to:   ws://192.168.1.159:39585
-2022-08-26 13:57:15,134 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,134 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:57:15,134 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:15,134 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yrewuh5b
-2022-08-26 13:57:15,134 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,134 - distributed.worker - INFO -       Start worker at:   ws://192.168.1.159:39723
-2022-08-26 13:57:15,134 - distributed.worker - INFO -          Listening to:   ws://192.168.1.159:39723
-2022-08-26 13:57:15,134 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:57:15,134 - distributed.worker - INFO -          dashboard at:        192.168.1.159:38237
-2022-08-26 13:57:15,134 - distributed.worker - INFO - Waiting to connect to:   ws://192.168.1.159:39585
-2022-08-26 13:57:15,134 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,134 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:57:15,135 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:15,135 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s1v9qgb1
-2022-08-26 13:57:15,135 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,141 - distributed.scheduler - INFO - Register worker <WorkerState 'ws://192.168.1.159:35721', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 13:57:15,142 - distributed.scheduler - INFO - Register worker <WorkerState 'ws://192.168.1.159:39723', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 13:57:15,143 - distributed.worker - INFO -         Registered to:   ws://192.168.1.159:39585
-2022-08-26 13:57:15,143 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,143 - distributed.scheduler - INFO - Starting worker compute stream, ws://192.168.1.159:35721
-2022-08-26 13:57:15,143 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:15,143 - distributed.worker - INFO -         Registered to:   ws://192.168.1.159:39585
-2022-08-26 13:57:15,143 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,143 - distributed.scheduler - INFO - Starting worker compute stream, ws://192.168.1.159:39723
-2022-08-26 13:57:15,143 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:15,144 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:15,144 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:15,161 - distributed.scheduler - INFO - Receive client connection: Client-aa00f9ba-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:15,161 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:15,321 - distributed.scheduler - INFO - Remove client Client-aa00f9ba-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:15,322 - distributed.scheduler - INFO - Remove client Client-aa00f9ba-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:15,322 - distributed.scheduler - INFO - Close client connection: Client-aa00f9ba-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:15,323 - distributed.worker - INFO - Stopping worker at ws://192.168.1.159:35721
-2022-08-26 13:57:15,324 - distributed.worker - INFO - Stopping worker at ws://192.168.1.159:39723
-2022-08-26 13:57:15,325 - distributed.scheduler - INFO - Remove worker <WorkerState 'ws://192.168.1.159:39723', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:15,325 - distributed.core - INFO - Removing comms to ws://192.168.1.159:39723
-2022-08-26 13:57:15,326 - distributed.scheduler - INFO - Remove worker <WorkerState 'ws://192.168.1.159:35721', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:15,326 - distributed.core - INFO - Removing comms to ws://192.168.1.159:35721
-2022-08-26 13:57:15,326 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:15,326 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b6de6fd5-931e-4c39-bb7d-bcb4bcae9bfe Address ws://192.168.1.159:39723 Status: Status.closing
-2022-08-26 13:57:15,326 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3f047130-5235-4798-90a9-d04e78f798ae Address ws://192.168.1.159:35721 Status: Status.closing
-2022-08-26 13:57:15,327 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:15,327 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_collections 2022-08-26 13:57:15,469 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:15,470 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:15,471 - distributed.scheduler - INFO -   Scheduler at:  ws://192.168.1.159:39827
-2022-08-26 13:57:15,471 - distributed.scheduler - INFO -   dashboard at:                    :44259
-2022-08-26 13:57:15,475 - distributed.worker - INFO -       Start worker at:   ws://192.168.1.159:42435
-2022-08-26 13:57:15,475 - distributed.worker - INFO -          Listening to:   ws://192.168.1.159:42435
-2022-08-26 13:57:15,475 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:57:15,475 - distributed.worker - INFO -          dashboard at:        192.168.1.159:46459
-2022-08-26 13:57:15,475 - distributed.worker - INFO - Waiting to connect to:   ws://192.168.1.159:39827
-2022-08-26 13:57:15,475 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,475 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:57:15,475 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:15,475 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lwmxxxtf
-2022-08-26 13:57:15,475 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,476 - distributed.worker - INFO -       Start worker at:   ws://192.168.1.159:44911
-2022-08-26 13:57:15,476 - distributed.worker - INFO -          Listening to:   ws://192.168.1.159:44911
-2022-08-26 13:57:15,476 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:57:15,476 - distributed.worker - INFO -          dashboard at:        192.168.1.159:37903
-2022-08-26 13:57:15,476 - distributed.worker - INFO - Waiting to connect to:   ws://192.168.1.159:39827
-2022-08-26 13:57:15,476 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,476 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:57:15,476 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:15,476 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-slpdpsxx
-2022-08-26 13:57:15,476 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,482 - distributed.scheduler - INFO - Register worker <WorkerState 'ws://192.168.1.159:42435', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 13:57:15,482 - distributed.scheduler - INFO - Register worker <WorkerState 'ws://192.168.1.159:44911', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 13:57:15,483 - distributed.worker - INFO -         Registered to:   ws://192.168.1.159:39827
-2022-08-26 13:57:15,483 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,483 - distributed.scheduler - INFO - Starting worker compute stream, ws://192.168.1.159:42435
-2022-08-26 13:57:15,483 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:15,484 - distributed.worker - INFO -         Registered to:   ws://192.168.1.159:39827
-2022-08-26 13:57:15,484 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:15,484 - distributed.scheduler - INFO - Starting worker compute stream, ws://192.168.1.159:44911
-2022-08-26 13:57:15,484 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:15,484 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:15,484 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:15,501 - distributed.scheduler - INFO - Receive client connection: Client-aa34e747-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:15,501 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:15,852 - distributed.scheduler - INFO - Remove client Client-aa34e747-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:15,852 - distributed.scheduler - INFO - Remove client Client-aa34e747-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:15,853 - distributed.scheduler - INFO - Close client connection: Client-aa34e747-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:15,854 - distributed.worker - INFO - Stopping worker at ws://192.168.1.159:42435
-2022-08-26 13:57:15,854 - distributed.worker - INFO - Stopping worker at ws://192.168.1.159:44911
-2022-08-26 13:57:15,855 - distributed.scheduler - INFO - Remove worker <WorkerState 'ws://192.168.1.159:42435', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:15,855 - distributed.core - INFO - Removing comms to ws://192.168.1.159:42435
-2022-08-26 13:57:15,855 - distributed.scheduler - INFO - Remove worker <WorkerState 'ws://192.168.1.159:44911', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:15,855 - distributed.core - INFO - Removing comms to ws://192.168.1.159:44911
-2022-08-26 13:57:15,855 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:15,856 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7be78c50-19f0-4511-9383-88ff28acffac Address ws://192.168.1.159:42435 Status: Status.closing
-2022-08-26 13:57:15,856 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4ecdbbfa-1a2a-4baf-a7c8-fb4e102edfb2 Address ws://192.168.1.159:44911 Status: Status.closing
-2022-08-26 13:57:15,857 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:15,857 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_large_transfer 2022-08-26 13:57:16,001 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:16,003 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:16,003 - distributed.scheduler - INFO -   Scheduler at:  ws://192.168.1.159:45509
-2022-08-26 13:57:16,003 - distributed.scheduler - INFO -   dashboard at:                    :40163
-2022-08-26 13:57:16,008 - distributed.worker - INFO -       Start worker at:   ws://192.168.1.159:38699
-2022-08-26 13:57:16,008 - distributed.worker - INFO -          Listening to:   ws://192.168.1.159:38699
-2022-08-26 13:57:16,008 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:57:16,008 - distributed.worker - INFO -          dashboard at:        192.168.1.159:37907
-2022-08-26 13:57:16,008 - distributed.worker - INFO - Waiting to connect to:   ws://192.168.1.159:45509
-2022-08-26 13:57:16,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,008 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:57:16,008 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:16,008 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rd_h0bp4
-2022-08-26 13:57:16,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,009 - distributed.worker - INFO -       Start worker at:   ws://192.168.1.159:44535
-2022-08-26 13:57:16,009 - distributed.worker - INFO -          Listening to:   ws://192.168.1.159:44535
-2022-08-26 13:57:16,009 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:57:16,009 - distributed.worker - INFO -          dashboard at:        192.168.1.159:36725
-2022-08-26 13:57:16,009 - distributed.worker - INFO - Waiting to connect to:   ws://192.168.1.159:45509
-2022-08-26 13:57:16,009 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,009 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:57:16,009 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:16,009 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-b8gkvh55
-2022-08-26 13:57:16,009 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,014 - distributed.scheduler - INFO - Register worker <WorkerState 'ws://192.168.1.159:38699', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 13:57:16,015 - distributed.scheduler - INFO - Register worker <WorkerState 'ws://192.168.1.159:44535', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 13:57:16,016 - distributed.worker - INFO -         Registered to:   ws://192.168.1.159:45509
-2022-08-26 13:57:16,016 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,016 - distributed.scheduler - INFO - Starting worker compute stream, ws://192.168.1.159:38699
-2022-08-26 13:57:16,016 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,016 - distributed.worker - INFO -         Registered to:   ws://192.168.1.159:45509
-2022-08-26 13:57:16,016 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,017 - distributed.scheduler - INFO - Starting worker compute stream, ws://192.168.1.159:44535
-2022-08-26 13:57:16,017 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,017 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,017 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,034 - distributed.scheduler - INFO - Receive client connection: Client-aa8634f4-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,034 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,071 - distributed.scheduler - INFO - Remove client Client-aa8634f4-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,071 - distributed.scheduler - INFO - Remove client Client-aa8634f4-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,071 - distributed.scheduler - INFO - Close client connection: Client-aa8634f4-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,072 - distributed.worker - INFO - Stopping worker at ws://192.168.1.159:38699
-2022-08-26 13:57:16,072 - distributed.worker - INFO - Stopping worker at ws://192.168.1.159:44535
-2022-08-26 13:57:16,074 - distributed.scheduler - INFO - Remove worker <WorkerState 'ws://192.168.1.159:38699', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:16,074 - distributed.core - INFO - Removing comms to ws://192.168.1.159:38699
-2022-08-26 13:57:16,074 - distributed.scheduler - INFO - Remove worker <WorkerState 'ws://192.168.1.159:44535', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:16,074 - distributed.core - INFO - Removing comms to ws://192.168.1.159:44535
-2022-08-26 13:57:16,074 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:16,075 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-84b4ab2e-707f-4843-89e5-46750e1957f7 Address ws://192.168.1.159:38699 Status: Status.closing
-2022-08-26 13:57:16,075 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-44a737d3-c607-4080-8115-647220dc4f49 Address ws://192.168.1.159:44535 Status: Status.closing
-2022-08-26 13:57:16,076 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:16,076 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_large_transfer_with_no_compression 2022-08-26 13:57:16,218 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:16,220 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:16,220 - distributed.scheduler - INFO -   Scheduler at:  ws://192.168.1.159:41079
-2022-08-26 13:57:16,220 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:57:16,223 - distributed.worker - INFO -       Start worker at:   ws://192.168.1.159:33217
-2022-08-26 13:57:16,223 - distributed.worker - INFO -          Listening to:   ws://192.168.1.159:33217
-2022-08-26 13:57:16,223 - distributed.worker - INFO -          dashboard at:        192.168.1.159:38387
-2022-08-26 13:57:16,223 - distributed.worker - INFO - Waiting to connect to:   ws://192.168.1.159:41079
-2022-08-26 13:57:16,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,223 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:16,223 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:16,223 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-58oiq0ny
-2022-08-26 13:57:16,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,227 - distributed.scheduler - INFO - Register worker <WorkerState 'ws://192.168.1.159:33217', status: init, memory: 0, processing: 0>
-2022-08-26 13:57:16,227 - distributed.worker - INFO -         Registered to:   ws://192.168.1.159:41079
-2022-08-26 13:57:16,227 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,228 - distributed.scheduler - INFO - Starting worker compute stream, ws://192.168.1.159:33217
-2022-08-26 13:57:16,228 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,228 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,234 - distributed.scheduler - INFO - Receive client connection: Client-aaa4a46a-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,234 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,271 - distributed.scheduler - INFO - Remove client Client-aaa4a46a-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,272 - distributed.scheduler - INFO - Remove client Client-aaa4a46a-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,272 - distributed.scheduler - INFO - Close client connection: Client-aaa4a46a-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,273 - distributed.worker - INFO - Stopping worker at ws://192.168.1.159:33217
-2022-08-26 13:57:16,274 - distributed.scheduler - INFO - Remove worker <WorkerState 'ws://192.168.1.159:33217', status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:16,274 - distributed.core - INFO - Removing comms to ws://192.168.1.159:33217
-2022-08-26 13:57:16,275 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:16,275 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d7c141b4-e7ca-45f0-a6bf-75e4ac4b7af3 Address ws://192.168.1.159:33217 Status: Status.closing
-2022-08-26 13:57:16,275 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:16,275 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_http_and_comm_server[True-ws://-None-8787] 2022-08-26 13:57:16,301 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:16,303 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:16,303 - distributed.scheduler - INFO -   Scheduler at:   ws://192.168.1.159:8787
-2022-08-26 13:57:16,303 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:57:16,306 - distributed.worker - INFO -       Start worker at:   ws://192.168.1.159:36109
-2022-08-26 13:57:16,306 - distributed.worker - INFO -          Listening to:   ws://192.168.1.159:36109
-2022-08-26 13:57:16,306 - distributed.worker - INFO -          dashboard at:        192.168.1.159:35049
-2022-08-26 13:57:16,306 - distributed.worker - INFO - Waiting to connect to:    ws://192.168.1.159:8787
-2022-08-26 13:57:16,306 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,306 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:16,306 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:16,306 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6x0cxccj
-2022-08-26 13:57:16,306 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,309 - distributed.scheduler - INFO - Register worker <WorkerState 'ws://192.168.1.159:36109', status: init, memory: 0, processing: 0>
-2022-08-26 13:57:16,310 - distributed.worker - INFO -         Registered to:    ws://192.168.1.159:8787
-2022-08-26 13:57:16,310 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,310 - distributed.scheduler - INFO - Starting worker compute stream, ws://192.168.1.159:36109
-2022-08-26 13:57:16,310 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,312 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,318 - distributed.scheduler - INFO - Receive client connection: Client-aab14725-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,318 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,343 - distributed.scheduler - INFO - Remove client Client-aab14725-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,343 - distributed.scheduler - INFO - Remove client Client-aab14725-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,343 - distributed.scheduler - INFO - Close client connection: Client-aab14725-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,345 - distributed.worker - INFO - Stopping worker at ws://192.168.1.159:36109
-2022-08-26 13:57:16,346 - distributed.scheduler - INFO - Remove worker <WorkerState 'ws://192.168.1.159:36109', status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:16,346 - distributed.core - INFO - Removing comms to ws://192.168.1.159:36109
-2022-08-26 13:57:16,346 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:16,346 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f5ee9095-e2b9-495c-a22d-299506210ed1 Address ws://192.168.1.159:36109 Status: Status.closing
-2022-08-26 13:57:16,347 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:16,347 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_http_and_comm_server[True-wss://-True-8787] 2022-08-26 13:57:16,622 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:16,624 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:16,624 - distributed.scheduler - INFO -   Scheduler at:  wss://192.168.1.159:8787
-2022-08-26 13:57:16,624 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:57:16,628 - distributed.worker - INFO -       Start worker at:  wss://192.168.1.159:44689
-2022-08-26 13:57:16,628 - distributed.worker - INFO -          Listening to:  wss://192.168.1.159:44689
-2022-08-26 13:57:16,628 - distributed.worker - INFO -          dashboard at:        192.168.1.159:44773
-2022-08-26 13:57:16,628 - distributed.worker - INFO - Waiting to connect to:   wss://192.168.1.159:8787
-2022-08-26 13:57:16,628 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,628 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:16,628 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:16,628 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nkftrv6l
-2022-08-26 13:57:16,628 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,634 - distributed.scheduler - INFO - Register worker <WorkerState 'wss://192.168.1.159:44689', status: init, memory: 0, processing: 0>
-2022-08-26 13:57:16,635 - distributed.worker - INFO -         Registered to:   wss://192.168.1.159:8787
-2022-08-26 13:57:16,635 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,635 - distributed.scheduler - INFO - Starting worker compute stream, wss://192.168.1.159:44689
-2022-08-26 13:57:16,635 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,636 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,646 - distributed.scheduler - INFO - Receive client connection: Client-aae2d14f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,646 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,671 - distributed.scheduler - INFO - Remove client Client-aae2d14f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,671 - distributed.scheduler - INFO - Remove client Client-aae2d14f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,672 - distributed.scheduler - INFO - Close client connection: Client-aae2d14f-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,673 - distributed.worker - INFO - Stopping worker at wss://192.168.1.159:44689
-2022-08-26 13:57:16,674 - distributed.scheduler - INFO - Remove worker <WorkerState 'wss://192.168.1.159:44689', status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:16,674 - distributed.core - INFO - Removing comms to wss://192.168.1.159:44689
-2022-08-26 13:57:16,674 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:16,675 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3bb1005b-a7cd-4695-ad20-9180208ac266 Address wss://192.168.1.159:44689 Status: Status.closing
-2022-08-26 13:57:16,675 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:16,675 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_http_and_comm_server[False-ws://-None-8787] 2022-08-26 13:57:16,681 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:16,683 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:16,683 - distributed.scheduler - INFO -   Scheduler at:   ws://192.168.1.159:8787
-2022-08-26 13:57:16,683 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:57:16,685 - distributed.worker - INFO -       Start worker at:   ws://192.168.1.159:43215
-2022-08-26 13:57:16,685 - distributed.worker - INFO -          Listening to:   ws://192.168.1.159:43215
-2022-08-26 13:57:16,685 - distributed.worker - INFO -          dashboard at:        192.168.1.159:38637
-2022-08-26 13:57:16,685 - distributed.worker - INFO - Waiting to connect to:    ws://192.168.1.159:8787
-2022-08-26 13:57:16,685 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,685 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:16,686 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:16,686 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qxj863ej
-2022-08-26 13:57:16,686 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,689 - distributed.scheduler - INFO - Register worker <WorkerState 'ws://192.168.1.159:43215', status: init, memory: 0, processing: 0>
-2022-08-26 13:57:16,689 - distributed.worker - INFO -         Registered to:    ws://192.168.1.159:8787
-2022-08-26 13:57:16,689 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,690 - distributed.scheduler - INFO - Starting worker compute stream, ws://192.168.1.159:43215
-2022-08-26 13:57:16,690 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,690 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,698 - distributed.scheduler - INFO - Receive client connection: Client-aaeb2aef-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,698 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,723 - distributed.scheduler - INFO - Remove client Client-aaeb2aef-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,723 - distributed.scheduler - INFO - Remove client Client-aaeb2aef-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,724 - distributed.scheduler - INFO - Close client connection: Client-aaeb2aef-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,725 - distributed.worker - INFO - Stopping worker at ws://192.168.1.159:43215
-2022-08-26 13:57:16,726 - distributed.scheduler - INFO - Remove worker <WorkerState 'ws://192.168.1.159:43215', status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:16,726 - distributed.core - INFO - Removing comms to ws://192.168.1.159:43215
-2022-08-26 13:57:16,726 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:16,726 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d1c757c1-8e0d-4089-989a-8db68806eeb7 Address ws://192.168.1.159:43215 Status: Status.closing
-2022-08-26 13:57:16,727 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:16,727 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_http_and_comm_server[False-wss://-True-8787] 2022-08-26 13:57:16,836 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:16,838 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:16,838 - distributed.scheduler - INFO -   Scheduler at:  wss://192.168.1.159:8787
-2022-08-26 13:57:16,838 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:57:16,841 - distributed.worker - INFO -       Start worker at:  wss://192.168.1.159:35167
-2022-08-26 13:57:16,841 - distributed.worker - INFO -          Listening to:  wss://192.168.1.159:35167
-2022-08-26 13:57:16,842 - distributed.worker - INFO -          dashboard at:        192.168.1.159:34339
-2022-08-26 13:57:16,842 - distributed.worker - INFO - Waiting to connect to:   wss://192.168.1.159:8787
-2022-08-26 13:57:16,842 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,842 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:16,842 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:16,842 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8tjxjpws
-2022-08-26 13:57:16,842 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,848 - distributed.scheduler - INFO - Register worker <WorkerState 'wss://192.168.1.159:35167', status: init, memory: 0, processing: 0>
-2022-08-26 13:57:16,848 - distributed.worker - INFO -         Registered to:   wss://192.168.1.159:8787
-2022-08-26 13:57:16,849 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,849 - distributed.scheduler - INFO - Starting worker compute stream, wss://192.168.1.159:35167
-2022-08-26 13:57:16,849 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,850 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,860 - distributed.scheduler - INFO - Receive client connection: Client-ab037251-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,860 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,884 - distributed.scheduler - INFO - Remove client Client-ab037251-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,884 - distributed.scheduler - INFO - Remove client Client-ab037251-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,885 - distributed.scheduler - INFO - Close client connection: Client-ab037251-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,886 - distributed.worker - INFO - Stopping worker at wss://192.168.1.159:35167
-2022-08-26 13:57:16,888 - distributed.scheduler - INFO - Remove worker <WorkerState 'wss://192.168.1.159:35167', status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:16,888 - distributed.core - INFO - Removing comms to wss://192.168.1.159:35167
-2022-08-26 13:57:16,888 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:16,888 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0735fc2d-01ce-4a5e-94e4-206a19105b11 Address wss://192.168.1.159:35167 Status: Status.closing
-2022-08-26 13:57:16,888 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:16,889 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_http_and_comm_server[True-ws://-None-8786] 2022-08-26 13:57:16,915 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:16,916 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:16,917 - distributed.scheduler - INFO -   Scheduler at:   ws://192.168.1.159:8786
-2022-08-26 13:57:16,917 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:57:16,919 - distributed.worker - INFO -       Start worker at:   ws://192.168.1.159:44315
-2022-08-26 13:57:16,919 - distributed.worker - INFO -          Listening to:   ws://192.168.1.159:44315
-2022-08-26 13:57:16,919 - distributed.worker - INFO -          dashboard at:        192.168.1.159:37083
-2022-08-26 13:57:16,919 - distributed.worker - INFO - Waiting to connect to:    ws://192.168.1.159:8786
-2022-08-26 13:57:16,919 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,919 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:16,920 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:16,920 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-n4vtb1d0
-2022-08-26 13:57:16,920 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,923 - distributed.scheduler - INFO - Register worker <WorkerState 'ws://192.168.1.159:44315', status: init, memory: 0, processing: 0>
-2022-08-26 13:57:16,924 - distributed.worker - INFO -         Registered to:    ws://192.168.1.159:8786
-2022-08-26 13:57:16,924 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:16,924 - distributed.scheduler - INFO - Starting worker compute stream, ws://192.168.1.159:44315
-2022-08-26 13:57:16,924 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,924 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,932 - distributed.scheduler - INFO - Receive client connection: Client-ab0ee62b-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,932 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:16,956 - distributed.scheduler - INFO - Remove client Client-ab0ee62b-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,956 - distributed.scheduler - INFO - Remove client Client-ab0ee62b-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,957 - distributed.scheduler - INFO - Close client connection: Client-ab0ee62b-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:16,958 - distributed.worker - INFO - Stopping worker at ws://192.168.1.159:44315
-2022-08-26 13:57:16,959 - distributed.scheduler - INFO - Remove worker <WorkerState 'ws://192.168.1.159:44315', status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:16,959 - distributed.core - INFO - Removing comms to ws://192.168.1.159:44315
-2022-08-26 13:57:16,959 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:16,959 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cfc41594-5fa1-485f-bd5b-faefa3bdfd47 Address ws://192.168.1.159:44315 Status: Status.closing
-2022-08-26 13:57:16,960 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:16,960 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_http_and_comm_server[True-wss://-True-8786] 2022-08-26 13:57:17,096 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:17,098 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:17,099 - distributed.scheduler - INFO -   Scheduler at:  wss://192.168.1.159:8786
-2022-08-26 13:57:17,099 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:57:17,102 - distributed.worker - INFO -       Start worker at:  wss://192.168.1.159:44823
-2022-08-26 13:57:17,102 - distributed.worker - INFO -          Listening to:  wss://192.168.1.159:44823
-2022-08-26 13:57:17,102 - distributed.worker - INFO -          dashboard at:        192.168.1.159:45217
-2022-08-26 13:57:17,102 - distributed.worker - INFO - Waiting to connect to:   wss://192.168.1.159:8786
-2022-08-26 13:57:17,102 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:17,102 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:17,102 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:17,102 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-55nnu371
-2022-08-26 13:57:17,102 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:17,108 - distributed.scheduler - INFO - Register worker <WorkerState 'wss://192.168.1.159:44823', status: init, memory: 0, processing: 0>
-2022-08-26 13:57:17,109 - distributed.worker - INFO -         Registered to:   wss://192.168.1.159:8786
-2022-08-26 13:57:17,109 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:17,110 - distributed.scheduler - INFO - Starting worker compute stream, wss://192.168.1.159:44823
-2022-08-26 13:57:17,110 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:17,110 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:17,121 - distributed.scheduler - INFO - Receive client connection: Client-ab2b31f8-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,121 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:17,145 - distributed.scheduler - INFO - Remove client Client-ab2b31f8-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,145 - distributed.scheduler - INFO - Remove client Client-ab2b31f8-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,146 - distributed.scheduler - INFO - Close client connection: Client-ab2b31f8-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,147 - distributed.worker - INFO - Stopping worker at wss://192.168.1.159:44823
-2022-08-26 13:57:17,148 - distributed.scheduler - INFO - Remove worker <WorkerState 'wss://192.168.1.159:44823', status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:17,148 - distributed.core - INFO - Removing comms to wss://192.168.1.159:44823
-2022-08-26 13:57:17,149 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:17,149 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ae44c567-c754-4de1-8856-75e82cbe545c Address wss://192.168.1.159:44823 Status: Status.closing
-2022-08-26 13:57:17,149 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:17,150 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_http_and_comm_server[False-ws://-None-8786] 2022-08-26 13:57:17,155 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:17,157 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:17,157 - distributed.scheduler - INFO -   Scheduler at:   ws://192.168.1.159:8786
-2022-08-26 13:57:17,157 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:57:17,160 - distributed.worker - INFO -       Start worker at:   ws://192.168.1.159:42069
-2022-08-26 13:57:17,160 - distributed.worker - INFO -          Listening to:   ws://192.168.1.159:42069
-2022-08-26 13:57:17,160 - distributed.worker - INFO -          dashboard at:        192.168.1.159:33695
-2022-08-26 13:57:17,160 - distributed.worker - INFO - Waiting to connect to:    ws://192.168.1.159:8786
-2022-08-26 13:57:17,160 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:17,160 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:17,160 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:17,160 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kdsyut11
-2022-08-26 13:57:17,160 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:17,163 - distributed.scheduler - INFO - Register worker <WorkerState 'ws://192.168.1.159:42069', status: init, memory: 0, processing: 0>
-2022-08-26 13:57:17,164 - distributed.worker - INFO -         Registered to:    ws://192.168.1.159:8786
-2022-08-26 13:57:17,164 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:17,164 - distributed.scheduler - INFO - Starting worker compute stream, ws://192.168.1.159:42069
-2022-08-26 13:57:17,164 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:17,165 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:17,172 - distributed.scheduler - INFO - Receive client connection: Client-ab3390f2-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,173 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:17,237 - distributed.scheduler - INFO - Remove client Client-ab3390f2-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,238 - distributed.scheduler - INFO - Remove client Client-ab3390f2-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,238 - distributed.scheduler - INFO - Close client connection: Client-ab3390f2-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,239 - distributed.worker - INFO - Stopping worker at ws://192.168.1.159:42069
-2022-08-26 13:57:17,241 - distributed.scheduler - INFO - Remove worker <WorkerState 'ws://192.168.1.159:42069', status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:17,241 - distributed.core - INFO - Removing comms to ws://192.168.1.159:42069
-2022-08-26 13:57:17,241 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:17,241 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7b84c1b9-7ced-4764-8cd0-5b529d20b6c2 Address ws://192.168.1.159:42069 Status: Status.closing
-2022-08-26 13:57:17,241 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:17,242 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_http_and_comm_server[False-wss://-True-8786] 2022-08-26 13:57:17,408 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:17,410 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:17,411 - distributed.scheduler - INFO -   Scheduler at:  wss://192.168.1.159:8786
-2022-08-26 13:57:17,411 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 13:57:17,414 - distributed.worker - INFO -       Start worker at:  wss://192.168.1.159:44829
-2022-08-26 13:57:17,414 - distributed.worker - INFO -          Listening to:  wss://192.168.1.159:44829
-2022-08-26 13:57:17,414 - distributed.worker - INFO -          dashboard at:        192.168.1.159:34941
-2022-08-26 13:57:17,414 - distributed.worker - INFO - Waiting to connect to:   wss://192.168.1.159:8786
-2022-08-26 13:57:17,414 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:17,414 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:17,414 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:17,414 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-avy7kgvl
-2022-08-26 13:57:17,414 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:17,420 - distributed.scheduler - INFO - Register worker <WorkerState 'wss://192.168.1.159:44829', status: init, memory: 0, processing: 0>
-2022-08-26 13:57:17,421 - distributed.worker - INFO -         Registered to:   wss://192.168.1.159:8786
-2022-08-26 13:57:17,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:17,422 - distributed.scheduler - INFO - Starting worker compute stream, wss://192.168.1.159:44829
-2022-08-26 13:57:17,422 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:17,422 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:17,433 - distributed.scheduler - INFO - Receive client connection: Client-ab5ace12-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,433 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:17,457 - distributed.scheduler - INFO - Remove client Client-ab5ace12-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,458 - distributed.scheduler - INFO - Remove client Client-ab5ace12-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,458 - distributed.scheduler - INFO - Close client connection: Client-ab5ace12-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:57:17,459 - distributed.worker - INFO - Stopping worker at wss://192.168.1.159:44829
-2022-08-26 13:57:17,461 - distributed.scheduler - INFO - Remove worker <WorkerState 'wss://192.168.1.159:44829', status: closing, memory: 0, processing: 0>
-2022-08-26 13:57:17,461 - distributed.core - INFO - Removing comms to wss://192.168.1.159:44829
-2022-08-26 13:57:17,461 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:57:17,461 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fba7e379-ea84-4576-a2f0-a1bd1805cca5 Address wss://192.168.1.159:44829 Status: Status.closing
-2022-08-26 13:57:17,462 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:17,462 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_connection_made_with_extra_conn_args[ws://] 2022-08-26 13:57:17,487 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:17,489 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:17,489 - distributed.scheduler - INFO -   Scheduler at:  ws://192.168.1.159:42087
-2022-08-26 13:57:17,489 - distributed.scheduler - INFO -   dashboard at:                    :43033
-2022-08-26 13:57:17,492 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:17,492 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_connection_made_with_extra_conn_args[wss://] 2022-08-26 13:57:17,581 - distributed.scheduler - INFO - State start
-2022-08-26 13:57:17,583 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:57:17,583 - distributed.scheduler - INFO -   Scheduler at: wss://192.168.1.159:44747
-2022-08-26 13:57:17,583 - distributed.scheduler - INFO -   dashboard at:                    :34265
-2022-08-26 13:57:17,590 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:57:17,590 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/comm/tests/test_ws.py::test_quiet_close PASSED
-distributed/comm/tests/test_ws.py::test_ws_roundtrip PASSED
-distributed/comm/tests/test_ws.py::test_wss_roundtrip PASSED
-distributed/dashboard/tests/test_bokeh.py::test_old_import PASSED
-distributed/dashboard/tests/test_components.py::test_basic[Processing] PASSED
-distributed/dashboard/tests/test_components.py::test_profile_plot PASSED
-distributed/dashboard/tests/test_components.py::test_profile_time_plot PASSED
-distributed/dashboard/tests/test_components.py::test_profile_time_plot_disabled 2022-08-26 13:57:18,891 - tornado.application - ERROR - Exception in callback functools.partial(<function TCPServer._handle_connection.<locals>.<lambda> at 0x56403fb9c8b0>, <Task finished name='Task-18151' coro=<BaseTCPListener._handle_stream() done, defined at /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py:588> exception=ValueError('invalid operation on non-started TCPListener')>)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 740, in _run_callback
-    ret = callback()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/tcpserver.py", line 391, in <lambda>
-    gen.convert_yielded(future), lambda f: f.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 594, in _handle_stream
-    logger.debug("Incoming connection from %r to %r", address, self.contact_address)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 630, in contact_address
-    host, port = self.get_host_port()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 611, in get_host_port
-    self._check_started()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 586, in _check_started
-    raise ValueError("invalid operation on non-started TCPListener")
-ValueError: invalid operation on non-started TCPListener
-PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_simple 2022-08-26 13:57:21,412 - distributed.diskutils - ERROR - Failed to remove '/tmp/dask-worker-space/worker-55ltr3bq/tmplhjyx4ol' (failed in <built-in function rmdir>): [Errno 39] Directory not empty: 'tmplhjyx4ol'
-2022-08-26 13:57:21,412 - distributed.diskutils - ERROR - Failed to remove '/tmp/dask-worker-space/worker-55ltr3bq' (failed in <built-in function rmdir>): [Errno 39] Directory not empty: '/tmp/dask-worker-space/worker-55ltr3bq'
-2022-08-26 13:57:21,415 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:37447 failed: CommClosedError: in <TCP (closed) Scheduler Broadcast local=tcp://127.0.0.1:36264 remote=tcp://127.0.0.1:37447>: Stream is closed
-2022-08-26 13:57:21,415 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:41267 failed: CommClosedError: in <TCP (closed) Scheduler Broadcast local=tcp://127.0.0.1:59586 remote=tcp://127.0.0.1:41267>: Stream is closed
-2022-08-26 13:57:21,416 - distributed.core - ERROR - Exception while handling op benchmark_disk
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2416, in benchmark_disk
-    return await self.loop.run_in_executor(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/_concurrent_futures_thread.py", line 65, in run
-    result = self.fn(*self.args, **self.kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 3168, in benchmark_disk
-    with open(dir / random.choice(names), mode="ab") as f:
-FileNotFoundError: [Errno 2] No such file or directory: '/tmp/dask-worker-space/worker-q07un7q6/tmpb2gkw9hm/91'
-2022-08-26 13:57:21,420 - tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOMainLoop object at 0x56403d6ebac0>>, <Task finished name='Task-18357' coro=<Hardware.__init__.<locals>.f() done, defined at /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/dashboard/components/scheduler.py:672> exception=CommClosedError('in <TCP (closed) Scheduler Broadcast local=tcp://127.0.0.1:36264 remote=tcp://127.0.0.1:37447>: Stream is closed')>)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 740, in _run_callback
-    ret = callback()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 764, in _discard_future_result
-    future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/dashboard/components/scheduler.py", line 673, in f
-    result = await self.scheduler.benchmark_hardware()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6481, in benchmark_hardware
-    result = await self.broadcast(msg={"op": "benchmark_disk"})
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5345, in broadcast
-    results = await All(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 236, in All
-    result = await tasks.next()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5323, in send_message
-    resp = await send_recv(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 919, in send_recv
-    response = await comm.read(deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 241, in read
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed) Scheduler Broadcast local=tcp://127.0.0.1:36264 remote=tcp://127.0.0.1:37447>: Stream is closed
-PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_basic PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_counters SKIPPED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_stealing_events PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_events PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_task_stream PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_task_stream_n_rectangles PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_task_stream_second_plugin PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_task_stream_clear_interval PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_TaskProgress PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_TaskProgress_empty PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_CurrentLoad PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_ProcessingHistogram PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_WorkersMemory PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_ClusterMemory PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_WorkersMemoryHistogram PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_WorkerTable PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_WorkerTable_custom_metrics PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_WorkerTable_different_metrics PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_WorkerTable_metrics_with_different_metric_2 PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_WorkerTable_add_and_remove_metrics PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_WorkerTable_custom_metric_overlap_with_core_metric PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_WorkerTable_with_memory_limit_as_0 PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_WorkerNetworkBandwidth PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_WorkerNetworkBandwidth_metrics PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_SystemTimeseries PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_TaskGraph PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_TaskGraph_clear PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_TaskGraph_limit PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_TaskGraph_complex PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_TaskGraph_order 2022-08-26 13:57:37,394 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_TaskGroupGraph PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_TaskGroupGraph_arrows PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_profile_server PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_profile_server_disabled SKIPPED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_root_redirect PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_proxy_to_workers PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_lots_of_tasks PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_https_support PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_memory_by_key PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_aggregate_action PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_compute_per_key PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_prefix_bokeh PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_shuffling 2022-08-26 13:57:44,555 - distributed.core - ERROR - Exception while handling op shuffle_receive
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 347, in _get_shuffle
-    return self.shuffles[shuffle_id]
-KeyError: 'ac6e68d12e23bd62407b7372a301b19b'
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 253, in shuffle_receive
-    shuffle = await self._get_shuffle(shuffle_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 370, in _get_shuffle
-    shuffle = Shuffle(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 68, in __init__
-    self.multi_file = MultiFile(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/multi_file.py", line 69, in __init__
-    os.mkdir(self.directory)
-FileNotFoundError: [Errno 2] No such file or directory: '/tmp/dask-worker-space/worker-vukzo663/shuffle-ac6e68d12e23bd62407b7372a301b19b'
-2022-08-26 13:57:44,557 - distributed.core - ERROR - Exception while handling op shuffle_receive
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 347, in _get_shuffle
-    return self.shuffles[shuffle_id]
-KeyError: 'ac6e68d12e23bd62407b7372a301b19b'
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 253, in shuffle_receive
-    shuffle = await self._get_shuffle(shuffle_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 370, in _get_shuffle
-    shuffle = Shuffle(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 68, in __init__
-    self.multi_file = MultiFile(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/multi_file.py", line 69, in __init__
-    os.mkdir(self.directory)
-FileNotFoundError: [Errno 2] No such file or directory: '/tmp/dask-worker-space/worker-ti0eppuv/shuffle-ac6e68d12e23bd62407b7372a301b19b'
-2022-08-26 13:57:44,713 - distributed.shuffle.multi_comm - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1302, in _connect
-    async def _connect(self, addr, timeout=None):
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/multi_comm.py", line 182, in process
-    await self.send(address, [b"".join(shards)])
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 80, in send
-    return await self.worker.rpc(address).shuffle_receive(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1151, in send_recv_from_rpc
-    comm = await self.pool.connect(self.addr)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1368, in connect
-    await connect_attempt
-asyncio.exceptions.CancelledError
-2022-08-26 13:57:44,714 - distributed.shuffle.multi_comm - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1302, in _connect
-    async def _connect(self, addr, timeout=None):
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/multi_comm.py", line 182, in process
-    await self.send(address, [b"".join(shards)])
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 80, in send
-    return await self.worker.rpc(address).shuffle_receive(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1151, in send_recv_from_rpc
-    comm = await self.pool.connect(self.addr)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1368, in connect
-    await connect_attempt
-asyncio.exceptions.CancelledError
-PASSED
-distributed/dashboard/tests/test_scheduler_bokeh.py::test_hardware PASSED
-distributed/dashboard/tests/test_worker_bokeh.py::test_routes PASSED
-distributed/dashboard/tests/test_worker_bokeh.py::test_simple PASSED
-distributed/dashboard/tests/test_worker_bokeh.py::test_services_kwargs PASSED
-distributed/dashboard/tests/test_worker_bokeh.py::test_basic[StateTable] SKIPPED
-distributed/dashboard/tests/test_worker_bokeh.py::test_basic[ExecutingTimeSeries] SKIPPED
-distributed/dashboard/tests/test_worker_bokeh.py::test_basic[CommunicatingTimeSeries] SKIPPED
-distributed/dashboard/tests/test_worker_bokeh.py::test_basic[SystemMonitor] SKIPPED
-distributed/dashboard/tests/test_worker_bokeh.py::test_counters SKIPPED
-distributed/dashboard/tests/test_worker_bokeh.py::test_CommunicatingStream PASSED
-distributed/dashboard/tests/test_worker_bokeh.py::test_prometheus PASSED
-distributed/deploy/tests/test_adaptive.py::test_adaptive_local_cluster 2022-08-26 13:57:48,589 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33307
-2022-08-26 13:57:48,589 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33307
-2022-08-26 13:57:48,589 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:57:48,589 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46053
-2022-08-26 13:57:48,589 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37701
-2022-08-26 13:57:48,590 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:48,590 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:57:48,590 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:57:48,590 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k_9pu4nc
-2022-08-26 13:57:48,590 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:48,817 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37701
-2022-08-26 13:57:48,817 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:57:48,818 - distributed.core - INFO - Starting established connection
-2022-08-26 13:57:49,182 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33307
-2022-08-26 13:57:49,182 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0cb4f889-d542-43a7-851d-cbc5518ecc9a Address tcp://127.0.0.1:33307 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_adaptive.py::test_adaptive_local_cluster_multi_workers PASSED
-distributed/deploy/tests/test_adaptive.py::test_adaptive_scale_down_override XFAIL
-distributed/deploy/tests/test_adaptive.py::test_min_max PASSED
-distributed/deploy/tests/test_adaptive.py::test_avoid_churn PASSED
-distributed/deploy/tests/test_adaptive.py::test_adapt_quickly PASSED
-distributed/deploy/tests/test_adaptive.py::test_adapt_down PASSED
-distributed/deploy/tests/test_adaptive.py::test_no_more_workers_than_tasks PASSED
-distributed/deploy/tests/test_adaptive.py::test_basic_no_loop FAILED
-distributed/deploy/tests/test_adaptive.py::test_basic_no_loop ERROR
-distributed/deploy/tests/test_adaptive.py::test_target_duration PASSED
-distributed/deploy/tests/test_adaptive.py::test_worker_keys PASSED
-distributed/deploy/tests/test_adaptive.py::test_adapt_cores_memory PASSED
-distributed/deploy/tests/test_adaptive.py::test_adaptive_config PASSED
-distributed/deploy/tests/test_adaptive.py::test_update_adaptive PASSED
-distributed/deploy/tests/test_adaptive.py::test_adaptive_no_memory_limit PASSED
-distributed/deploy/tests/test_adaptive.py::test_scale_needs_to_be_awaited PASSED
-distributed/deploy/tests/test_adaptive.py::test_adaptive_stopped PASSED
-distributed/deploy/tests/test_adaptive_core.py::test_safe_target PASSED
-distributed/deploy/tests/test_adaptive_core.py::test_scale_up PASSED
-distributed/deploy/tests/test_adaptive_core.py::test_scale_down PASSED
-distributed/deploy/tests/test_adaptive_core.py::test_interval PASSED
-distributed/deploy/tests/test_adaptive_core.py::test_adapt_oserror_safe_target 2022-08-26 13:58:04,940 - distributed.deploy.adaptive_core - ERROR - Adaptive stopping due to error
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/deploy/adaptive_core.py", line 228, in adapt
-    target = await self.safe_target()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/deploy/tests/test_adaptive_core.py", line 104, in safe_target
-    raise OSError()
-OSError
-PASSED
-distributed/deploy/tests/test_adaptive_core.py::test_adapt_oserror_scale 2022-08-26 13:58:04,954 - distributed.deploy.adaptive_core - ERROR - Error during adaptive downscaling. Ignoring.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/deploy/adaptive_core.py", line 240, in adapt
-    await self.scale_down(**recommendations)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/deploy/tests/test_adaptive_core.py", line 130, in scale_down
-    raise OSError()
-OSError
-PASSED
-distributed/deploy/tests/test_adaptive_core.py::test_adapt_stop_del PASSED
-distributed/deploy/tests/test_cluster.py::test_eq PASSED
-distributed/deploy/tests/test_cluster.py::test_repr PASSED
-distributed/deploy/tests/test_cluster.py::test_logs_deprecated PASSED
-distributed/deploy/tests/test_cluster.py::test_deprecated_loop_properties PASSED
-distributed/deploy/tests/test_deploy_utils.py::test_default_process_thread_breakdown PASSED
-distributed/deploy/tests/test_local.py::test_simple PASSED
-distributed/deploy/tests/test_local.py::test_local_cluster_supports_blocked_handlers 2022-08-26 13:58:05,097 - distributed.core - ERROR - Exception while handling op run_function
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 768, in _handle_comm
-    result = handler(**msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 92, in _raise
-    raise exc
-ValueError: The 'run_function' handler has been explicitly disallowed in Scheduler, possibly due to security concerns.
-PASSED
-distributed/deploy/tests/test_local.py::test_close_twice PASSED
-distributed/deploy/tests/test_local.py::test_procs 2022-08-26 13:58:07,374 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33863
-2022-08-26 13:58:07,374 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33863
-2022-08-26 13:58:07,375 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:07,375 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42631
-2022-08-26 13:58:07,375 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34015
-2022-08-26 13:58:07,375 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:07,375 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:07,375 - distributed.worker - INFO -                Memory:                  31.41 GiB
-2022-08-26 13:58:07,375 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5vh7lrd3
-2022-08-26 13:58:07,375 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:07,383 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40949
-2022-08-26 13:58:07,383 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40949
-2022-08-26 13:58:07,383 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:07,383 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44625
-2022-08-26 13:58:07,383 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34015
-2022-08-26 13:58:07,383 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:07,383 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:07,383 - distributed.worker - INFO -                Memory:                  31.41 GiB
-2022-08-26 13:58:07,383 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q42gy_t1
-2022-08-26 13:58:07,383 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:07,603 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34015
-2022-08-26 13:58:07,604 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:07,604 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:07,607 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34015
-2022-08-26 13:58:07,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:07,608 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:08,040 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34607
-2022-08-26 13:58:08,040 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34607
-2022-08-26 13:58:08,040 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:08,040 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36995
-2022-08-26 13:58:08,040 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34015
-2022-08-26 13:58:08,040 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:08,040 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:08,040 - distributed.worker - INFO -                Memory:                  31.41 GiB
-2022-08-26 13:58:08,040 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-47cuu0yp
-2022-08-26 13:58:08,040 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:08,249 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34015
-2022-08-26 13:58:08,249 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:08,249 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:08,286 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40949
-2022-08-26 13:58:08,287 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33863
-2022-08-26 13:58:08,287 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34607
-2022-08-26 13:58:08,287 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ab008f09-8199-4967-bec7-433829c23378 Address tcp://127.0.0.1:40949 Status: Status.closing
-2022-08-26 13:58:08,288 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6bb02c29-65ed-4f97-ab7b-b5255ddf7bcb Address tcp://127.0.0.1:34607 Status: Status.closing
-2022-08-26 13:58:08,288 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ff9fd8d2-c76e-42b9-bb8b-78240b1c31d5 Address tcp://127.0.0.1:33863 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_local.py::test_move_unserializable_data PASSED
-distributed/deploy/tests/test_local.py::test_transports_inproc PASSED
-distributed/deploy/tests/test_local.py::test_transports_tcp 2022-08-26 13:58:09,035 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39831
-2022-08-26 13:58:09,035 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39831
-2022-08-26 13:58:09,035 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:09,036 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46175
-2022-08-26 13:58:09,036 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35269
-2022-08-26 13:58:09,036 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:09,036 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:58:09,036 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:58:09,036 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1_4ph_k2
-2022-08-26 13:58:09,036 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:09,245 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35269
-2022-08-26 13:58:09,245 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:09,245 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:09,485 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39831
-2022-08-26 13:58:09,486 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a832665b-ee1f-414c-adae-c1f77624ebdc Address tcp://127.0.0.1:39831 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_local.py::test_transports_tcp_port PASSED
-distributed/deploy/tests/test_local.py::test_cores PASSED
-distributed/deploy/tests/test_local.py::test_submit PASSED
-distributed/deploy/tests/test_local.py::test_context_manager PASSED
-distributed/deploy/tests/test_local.py::test_no_workers_sync PASSED
-distributed/deploy/tests/test_local.py::test_Client_with_local 2022-08-26 13:58:10,370 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36301
-2022-08-26 13:58:10,370 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36301
-2022-08-26 13:58:10,370 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:10,370 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44135
-2022-08-26 13:58:10,370 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45975
-2022-08-26 13:58:10,370 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:10,370 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:58:10,370 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:58:10,370 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8efy60z6
-2022-08-26 13:58:10,370 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:10,601 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45975
-2022-08-26 13:58:10,601 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:10,602 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:10,625 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36301
-2022-08-26 13:58:10,626 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5efb27a9-a4f7-426b-a5b3-474fbcd06b1b Address tcp://127.0.0.1:36301 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_local.py::test_Client_solo 2022-08-26 13:58:11,226 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39893
-2022-08-26 13:58:11,226 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36113
-2022-08-26 13:58:11,226 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36113
-2022-08-26 13:58:11,226 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39893
-2022-08-26 13:58:11,226 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:11,226 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:11,226 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35403
-2022-08-26 13:58:11,226 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39481
-2022-08-26 13:58:11,226 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34165
-2022-08-26 13:58:11,226 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,226 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34165
-2022-08-26 13:58:11,226 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:11,226 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,226 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:11,226 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:11,226 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z17h23fi
-2022-08-26 13:58:11,226 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:11,226 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,226 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dxf046fd
-2022-08-26 13:58:11,226 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,240 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44947
-2022-08-26 13:58:11,240 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44947
-2022-08-26 13:58:11,240 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 13:58:11,240 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35435
-2022-08-26 13:58:11,240 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34165
-2022-08-26 13:58:11,240 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,240 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:11,240 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:11,240 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cqy1k2wz
-2022-08-26 13:58:11,240 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,243 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40375
-2022-08-26 13:58:11,243 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40375
-2022-08-26 13:58:11,243 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:11,244 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40931
-2022-08-26 13:58:11,244 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34165
-2022-08-26 13:58:11,244 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,244 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:11,244 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:11,244 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_fj0u645
-2022-08-26 13:58:11,244 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,455 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34165
-2022-08-26 13:58:11,455 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,456 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:11,456 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34165
-2022-08-26 13:58:11,456 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,457 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:11,466 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34165
-2022-08-26 13:58:11,466 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,466 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:11,475 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34165
-2022-08-26 13:58:11,475 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:11,476 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:11,508 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39893
-2022-08-26 13:58:11,509 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0582e43d-1daf-45ae-828f-470a41064dcb Address tcp://127.0.0.1:39893 Status: Status.closing
-2022-08-26 13:58:11,510 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40375
-2022-08-26 13:58:11,510 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36113
-2022-08-26 13:58:11,510 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44947
-2022-08-26 13:58:11,511 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-46d1a30a-b64d-45ed-9f23-19dc380fb74d Address tcp://127.0.0.1:40375 Status: Status.closing
-2022-08-26 13:58:11,511 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b66b018d-7c15-481d-bcbc-79f70ac3ef3e Address tcp://127.0.0.1:36113 Status: Status.closing
-2022-08-26 13:58:11,511 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f6c81135-ab6d-4b2f-ae70-10a96a4e9119 Address tcp://127.0.0.1:44947 Status: Status.closing
-PASSED
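(The test_local.py cases logged above -- test_submit, test_Client_with_local, test_Client_solo -- exercise distributed's in-process LocalCluster/Client pair. A minimal sketch of that pattern, using illustrative worker and thread counts rather than the values from this run, might look like:

    from distributed import Client, LocalCluster

    if __name__ == "__main__":
        # Spin up a small local cluster; dashboard disabled to keep output quiet.
        # n_workers/threads_per_worker here are example values, not the test's.
        cluster = LocalCluster(n_workers=2, threads_per_worker=1,
                               dashboard_address=None)
        client = Client(cluster)
        try:
            # Run a trivial task on a worker and fetch the result.
            future = client.submit(lambda x: x + 1, 10)
            assert future.result() == 11
        finally:
            client.close()
            cluster.close()
)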
-distributed/deploy/tests/test_local.py::test_duplicate_clients PASSED
-distributed/deploy/tests/test_local.py::test_Client_kwargs PASSED
-distributed/deploy/tests/test_local.py::test_Client_unused_kwargs_with_cluster PASSED
-distributed/deploy/tests/test_local.py::test_Client_unused_kwargs_with_address PASSED
-distributed/deploy/tests/test_local.py::test_Client_twice 2022-08-26 13:58:13,275 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34627
-2022-08-26 13:58:13,275 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34627
-2022-08-26 13:58:13,275 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:13,275 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35099
-2022-08-26 13:58:13,275 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34413
-2022-08-26 13:58:13,275 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,275 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:13,275 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:13,275 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hzli6a4j
-2022-08-26 13:58:13,275 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,276 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43129
-2022-08-26 13:58:13,276 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36777
-2022-08-26 13:58:13,276 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43129
-2022-08-26 13:58:13,276 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36777
-2022-08-26 13:58:13,276 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:13,276 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:13,276 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33131
-2022-08-26 13:58:13,276 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44559
-2022-08-26 13:58:13,276 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34413
-2022-08-26 13:58:13,276 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34413
-2022-08-26 13:58:13,276 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,276 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,276 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:13,276 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:13,276 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:13,276 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:13,276 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t924gr04
-2022-08-26 13:58:13,276 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-20eqfvr1
-2022-08-26 13:58:13,276 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,276 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,276 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35863
-2022-08-26 13:58:13,276 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35863
-2022-08-26 13:58:13,276 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 13:58:13,276 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36611
-2022-08-26 13:58:13,276 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34413
-2022-08-26 13:58:13,276 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,276 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:13,276 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:13,276 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hntvllab
-2022-08-26 13:58:13,276 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,505 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34413
-2022-08-26 13:58:13,505 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,506 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:13,507 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34413
-2022-08-26 13:58:13,508 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,508 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:13,515 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34413
-2022-08-26 13:58:13,516 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,516 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:13,516 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34413
-2022-08-26 13:58:13,516 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:13,517 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:14,033 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36247
-2022-08-26 13:58:14,034 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36247
-2022-08-26 13:58:14,034 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:14,034 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36169
-2022-08-26 13:58:14,034 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35597
-2022-08-26 13:58:14,034 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,034 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:14,034 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:14,034 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6q_9by59
-2022-08-26 13:58:14,034 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,034 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43841
-2022-08-26 13:58:14,034 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43841
-2022-08-26 13:58:14,034 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 13:58:14,034 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39375
-2022-08-26 13:58:14,034 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35597
-2022-08-26 13:58:14,034 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,034 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:14,034 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:14,034 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qypk7c58
-2022-08-26 13:58:14,034 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,034 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32947
-2022-08-26 13:58:14,034 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32947
-2022-08-26 13:58:14,034 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:14,035 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36011
-2022-08-26 13:58:14,035 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41347
-2022-08-26 13:58:14,035 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35597
-2022-08-26 13:58:14,035 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41347
-2022-08-26 13:58:14,035 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,035 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:14,035 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:14,035 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40609
-2022-08-26 13:58:14,035 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:14,035 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35597
-2022-08-26 13:58:14,035 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t1iexyr_
-2022-08-26 13:58:14,035 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,035 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:14,035 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,035 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:14,035 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nttov4m_
-2022-08-26 13:58:14,035 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,262 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35597
-2022-08-26 13:58:14,262 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,262 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:14,263 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35597
-2022-08-26 13:58:14,263 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,264 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:14,277 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35597
-2022-08-26 13:58:14,277 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,278 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:14,279 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35597
-2022-08-26 13:58:14,279 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:14,280 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:14,308 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41347
-2022-08-26 13:58:14,308 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36247
-2022-08-26 13:58:14,309 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32947
-2022-08-26 13:58:14,309 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-92eeb6ab-e497-4ccd-ac4c-b45220f967f5 Address tcp://127.0.0.1:41347 Status: Status.closing
-2022-08-26 13:58:14,309 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43841
-2022-08-26 13:58:14,309 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7e480f9c-eb64-4d06-8dc4-d35c71d83eb1 Address tcp://127.0.0.1:36247 Status: Status.closing
-2022-08-26 13:58:14,309 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4d8b1e0a-fe91-49a9-912d-eea47734b377 Address tcp://127.0.0.1:32947 Status: Status.closing
-2022-08-26 13:58:14,310 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-99916c8d-9ef6-4d84-8946-9820f2fcceaf Address tcp://127.0.0.1:43841 Status: Status.closing
-2022-08-26 13:58:14,504 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43129
-2022-08-26 13:58:14,505 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34627
-2022-08-26 13:58:14,505 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36777
-2022-08-26 13:58:14,505 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35863
-2022-08-26 13:58:14,505 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fc0c6c65-98a4-473e-9a22-27c06c93f2b5 Address tcp://127.0.0.1:43129 Status: Status.closing
-2022-08-26 13:58:14,505 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-35ee28a4-c8b8-4e19-bbbf-6f73a68cbf18 Address tcp://127.0.0.1:34627 Status: Status.closing
-2022-08-26 13:58:14,506 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-05106826-3d27-45a1-a84c-997e0c6c7d97 Address tcp://127.0.0.1:36777 Status: Status.closing
-2022-08-26 13:58:14,506 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fc4bd5da-b748-4981-93cf-0673e66292d6 Address tcp://127.0.0.1:35863 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_local.py::test_client_constructor_with_temporary_security 2022-08-26 13:58:15,353 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:46627
-2022-08-26 13:58:15,353 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:44273
-2022-08-26 13:58:15,353 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:46627
-2022-08-26 13:58:15,353 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:44273
-2022-08-26 13:58:15,353 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:15,353 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:15,353 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32813
-2022-08-26 13:58:15,353 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33801
-2022-08-26 13:58:15,353 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:36069
-2022-08-26 13:58:15,353 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:36069
-2022-08-26 13:58:15,353 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,353 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:15,353 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,353 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:15,353 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:15,353 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4ij989n4
-2022-08-26 13:58:15,353 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:15,353 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j5jsqa_1
-2022-08-26 13:58:15,353 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,353 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,355 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:36781
-2022-08-26 13:58:15,355 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:36781
-2022-08-26 13:58:15,356 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 13:58:15,356 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38489
-2022-08-26 13:58:15,356 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:36069
-2022-08-26 13:58:15,356 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,356 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:15,356 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:15,356 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6v6pl33y
-2022-08-26 13:58:15,356 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,356 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:39355
-2022-08-26 13:58:15,356 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:39355
-2022-08-26 13:58:15,356 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:15,356 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36387
-2022-08-26 13:58:15,357 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:36069
-2022-08-26 13:58:15,357 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,357 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:15,357 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:15,357 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2xxrgc8m
-2022-08-26 13:58:15,357 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,607 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:36069
-2022-08-26 13:58:15,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,608 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:15,609 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:36069
-2022-08-26 13:58:15,610 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,610 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:15,612 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:36069
-2022-08-26 13:58:15,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,613 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:15,619 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:36069
-2022-08-26 13:58:15,619 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:15,620 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:15,692 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:39355
-2022-08-26 13:58:15,693 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:44273
-2022-08-26 13:58:15,693 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:46627
-2022-08-26 13:58:15,693 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:36781
-2022-08-26 13:58:15,693 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4dddb511-bd13-4c44-aa04-c9c143cb6d9a Address tls://127.0.0.1:39355 Status: Status.closing
-2022-08-26 13:58:15,693 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-535052a4-32f7-4f79-b3b9-0cc2b12cb5b6 Address tls://127.0.0.1:44273 Status: Status.closing
-2022-08-26 13:58:15,694 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b226101b-ea62-4d2b-99ea-3f65fe8dfea4 Address tls://127.0.0.1:36781 Status: Status.closing
-2022-08-26 13:58:15,694 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c194a042-3f78-4b43-b83f-ee2f1b6963f1 Address tls://127.0.0.1:46627 Status: Status.closing
-PASSED
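(The tls:// addresses in the test_client_constructor_with_temporary_security block above come from distributed's throwaway, in-memory TLS credentials. A rough sketch of that usage -- parameters are illustrative, not taken from the test itself -- could be:

    from distributed import Client, LocalCluster
    from distributed.security import Security

    if __name__ == "__main__":
        # Self-signed, in-memory TLS certificates (requires the cryptography package).
        security = Security.temporary()
        with LocalCluster(n_workers=1, protocol="tls", security=security,
                          dashboard_address=None) as cluster:
            with Client(cluster) as client:
                # Scheduler and workers should now be listening on tls:// endpoints.
                print(cluster.scheduler_address)
                print(client.submit(sum, [1, 2, 3]).result())
)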
-distributed/deploy/tests/test_local.py::test_defaults 2022-08-26 13:58:16,385 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42519
-2022-08-26 13:58:16,386 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42519
-2022-08-26 13:58:16,386 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:16,386 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41035
-2022-08-26 13:58:16,386 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39687
-2022-08-26 13:58:16,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,386 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:16,386 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:16,386 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9ynv5oib
-2022-08-26 13:58:16,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,387 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46153
-2022-08-26 13:58:16,387 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46153
-2022-08-26 13:58:16,387 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 13:58:16,387 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42135
-2022-08-26 13:58:16,387 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34193
-2022-08-26 13:58:16,387 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42135
-2022-08-26 13:58:16,387 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39687
-2022-08-26 13:58:16,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,387 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:16,387 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:16,387 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35657
-2022-08-26 13:58:16,387 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39687
-2022-08-26 13:58:16,387 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:16,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,387 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6jge2g37
-2022-08-26 13:58:16,387 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:16,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,387 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:16,387 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4m62wmpe
-2022-08-26 13:58:16,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,409 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43335
-2022-08-26 13:58:16,409 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43335
-2022-08-26 13:58:16,409 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:16,409 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37423
-2022-08-26 13:58:16,409 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39687
-2022-08-26 13:58:16,409 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,409 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:16,409 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:16,409 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uekcrvvz
-2022-08-26 13:58:16,409 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,618 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39687
-2022-08-26 13:58:16,618 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,619 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:16,624 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39687
-2022-08-26 13:58:16,624 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,625 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:16,637 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39687
-2022-08-26 13:58:16,637 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,638 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:16,649 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39687
-2022-08-26 13:58:16,650 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:16,650 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:16,655 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42519
-2022-08-26 13:58:16,656 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43335
-2022-08-26 13:58:16,656 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fb6ff14a-ced8-4c44-a323-0c155d6047ad Address tcp://127.0.0.1:42519 Status: Status.closing
-2022-08-26 13:58:16,656 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42135
-2022-08-26 13:58:16,657 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1427813c-9997-4871-b8a5-8fff610d4f27 Address tcp://127.0.0.1:43335 Status: Status.closing
-2022-08-26 13:58:16,657 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46153
-2022-08-26 13:58:16,657 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4849ded5-889d-4852-867a-3025df9b53e7 Address tcp://127.0.0.1:46153 Status: Status.closing
-2022-08-26 13:58:16,657 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b598c41e-8bad-47dc-9c05-ed026e018f75 Address tcp://127.0.0.1:42135 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_local.py::test_defaults_2 PASSED
-distributed/deploy/tests/test_local.py::test_defaults_3 2022-08-26 13:58:17,366 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44303
-2022-08-26 13:58:17,366 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44303
-2022-08-26 13:58:17,366 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:17,366 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40607
-2022-08-26 13:58:17,366 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39421
-2022-08-26 13:58:17,366 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:17,366 - distributed.worker - INFO -               Threads:                          6
-2022-08-26 13:58:17,366 - distributed.worker - INFO -                Memory:                  31.41 GiB
-2022-08-26 13:58:17,366 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c5y133jr
-2022-08-26 13:58:17,366 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:17,379 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43927
-2022-08-26 13:58:17,380 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43927
-2022-08-26 13:58:17,380 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:17,380 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34379
-2022-08-26 13:58:17,380 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39421
-2022-08-26 13:58:17,380 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:17,380 - distributed.worker - INFO -               Threads:                          6
-2022-08-26 13:58:17,380 - distributed.worker - INFO -                Memory:                  31.41 GiB
-2022-08-26 13:58:17,380 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zt5u0alj
-2022-08-26 13:58:17,380 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:17,596 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39421
-2022-08-26 13:58:17,596 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:17,597 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:17,601 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39421
-2022-08-26 13:58:17,601 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:17,602 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:17,642 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44303
-2022-08-26 13:58:17,642 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43927
-2022-08-26 13:58:17,643 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1f705a24-3ddc-434a-a985-7b35f3783e61 Address tcp://127.0.0.1:44303 Status: Status.closing
-2022-08-26 13:58:17,643 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-89bb532a-89f9-4d8d-90ee-fcfbb3a40480 Address tcp://127.0.0.1:43927 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_local.py::test_defaults_4 2022-08-26 13:58:18,257 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41123
-2022-08-26 13:58:18,258 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41123
-2022-08-26 13:58:18,258 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:18,258 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40719
-2022-08-26 13:58:18,258 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40115
-2022-08-26 13:58:18,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:18,258 - distributed.worker - INFO -               Threads:                         24
-2022-08-26 13:58:18,258 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:58:18,258 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3bd_wp03
-2022-08-26 13:58:18,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:18,473 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40115
-2022-08-26 13:58:18,474 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:18,474 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:18,483 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41123
-2022-08-26 13:58:18,484 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-62fd2f06-66d9-453d-9f7f-b3c274ecd252 Address tcp://127.0.0.1:41123 Status: Status.closing
-PASSED
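(The Memory figures in the test_defaults blocks above track the worker count: LocalCluster's default per-worker memory_limit is the machine's total memory split evenly across workers, which is why this 62.82 GiB host reports 15.71 GiB per worker with four workers, 31.41 GiB with two, and the full 62.82 GiB with one. An illustrative override -- values are examples, not from this run:

    from distributed import Client, LocalCluster

    if __name__ == "__main__":
        # Cap each worker at 4 GiB instead of the default total_memory / n_workers.
        cluster = LocalCluster(n_workers=4, threads_per_worker=1,
                               memory_limit="4GiB", dashboard_address=None)
        with Client(cluster) as client:
            info = client.scheduler_info()
            # Per-worker limits as the scheduler sees them, in bytes.
            print([w["memory_limit"] for w in info["workers"].values()])
        cluster.close()
)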
-distributed/deploy/tests/test_local.py::test_defaults_5 2022-08-26 13:58:19,881 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46467
-2022-08-26 13:58:19,881 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46467
-2022-08-26 13:58:19,881 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:19,881 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43301
-2022-08-26 13:58:19,881 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:19,881 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:19,881 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:19,881 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:19,881 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_h85bsui
-2022-08-26 13:58:19,881 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:19,891 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34403
-2022-08-26 13:58:19,891 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34403
-2022-08-26 13:58:19,891 - distributed.worker - INFO -           Worker name:                         22
-2022-08-26 13:58:19,891 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40851
-2022-08-26 13:58:19,892 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:19,892 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:19,892 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:19,892 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:19,892 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x0tnb2if
-2022-08-26 13:58:19,892 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:19,913 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33961
-2022-08-26 13:58:19,913 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33961
-2022-08-26 13:58:19,913 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:19,913 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36623
-2022-08-26 13:58:19,913 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:19,913 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:19,913 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:19,913 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37407
-2022-08-26 13:58:19,913 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:19,913 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37407
-2022-08-26 13:58:19,913 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qp_gkyh6
-2022-08-26 13:58:19,913 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:19,913 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40429
-2022-08-26 13:58:19,913 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:19,913 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:19,913 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:19,913 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:19,913 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:19,914 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4soco3im
-2022-08-26 13:58:19,914 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:19,926 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34235
-2022-08-26 13:58:19,926 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34235
-2022-08-26 13:58:19,926 - distributed.worker - INFO -           Worker name:                          6
-2022-08-26 13:58:19,926 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46335
-2022-08-26 13:58:19,926 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:19,926 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:19,926 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:19,926 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:19,926 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kntlqe0t
-2022-08-26 13:58:19,926 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:19,965 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35321
-2022-08-26 13:58:19,965 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35321
-2022-08-26 13:58:19,965 - distributed.worker - INFO -           Worker name:                          5
-2022-08-26 13:58:19,965 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34939
-2022-08-26 13:58:19,965 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:19,965 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:19,965 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:19,965 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:19,965 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uhdvy67r
-2022-08-26 13:58:19,965 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,004 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42669
-2022-08-26 13:58:20,004 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42669
-2022-08-26 13:58:20,005 - distributed.worker - INFO -           Worker name:                         19
-2022-08-26 13:58:20,005 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33581
-2022-08-26 13:58:20,005 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,005 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,005 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,005 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,005 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ix_v3dq5
-2022-08-26 13:58:20,005 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,014 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43885
-2022-08-26 13:58:20,014 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43885
-2022-08-26 13:58:20,014 - distributed.worker - INFO -           Worker name:                         16
-2022-08-26 13:58:20,014 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46835
-2022-08-26 13:58:20,014 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,014 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,014 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,014 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,015 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7dfnl57t
-2022-08-26 13:58:20,015 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,133 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39783
-2022-08-26 13:58:20,133 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39783
-2022-08-26 13:58:20,133 - distributed.worker - INFO -           Worker name:                         20
-2022-08-26 13:58:20,133 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45013
-2022-08-26 13:58:20,133 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,133 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,133 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,133 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,133 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5e3xdiyc
-2022-08-26 13:58:20,133 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,138 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43535
-2022-08-26 13:58:20,155 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43535
-2022-08-26 13:58:20,155 - distributed.worker - INFO -           Worker name:                         15
-2022-08-26 13:58:20,156 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43427
-2022-08-26 13:58:20,156 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,156 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,156 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,156 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h6nbgbyb
-2022-08-26 13:58:20,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,168 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43121
-2022-08-26 13:58:20,168 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43121
-2022-08-26 13:58:20,168 - distributed.worker - INFO -           Worker name:                         12
-2022-08-26 13:58:20,169 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33137
-2022-08-26 13:58:20,169 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,169 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,169 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,169 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,169 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d17s_5hq
-2022-08-26 13:58:20,169 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,203 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35281
-2022-08-26 13:58:20,203 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35281
-2022-08-26 13:58:20,203 - distributed.worker - INFO -           Worker name:                          7
-2022-08-26 13:58:20,203 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40661
-2022-08-26 13:58:20,203 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,203 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,203 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,203 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,203 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c9c4d1yu
-2022-08-26 13:58:20,203 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,261 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42209
-2022-08-26 13:58:20,261 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42209
-2022-08-26 13:58:20,261 - distributed.worker - INFO -           Worker name:                         10
-2022-08-26 13:58:20,261 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39237
-2022-08-26 13:58:20,261 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,261 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,261 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,261 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sywm0uu4
-2022-08-26 13:58:20,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,337 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44697
-2022-08-26 13:58:20,337 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38147
-2022-08-26 13:58:20,337 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38147
-2022-08-26 13:58:20,337 - distributed.worker - INFO -           Worker name:                         18
-2022-08-26 13:58:20,337 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33593
-2022-08-26 13:58:20,337 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44697
-2022-08-26 13:58:20,337 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,337 - distributed.worker - INFO -           Worker name:                         13
-2022-08-26 13:58:20,337 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,337 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39987
-2022-08-26 13:58:20,337 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,337 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,337 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,337 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,338 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-owq2sket
-2022-08-26 13:58:20,338 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,338 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,338 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,338 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6yd2ou3q
-2022-08-26 13:58:20,338 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,366 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43213
-2022-08-26 13:58:20,366 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43213
-2022-08-26 13:58:20,366 - distributed.worker - INFO -           Worker name:                         23
-2022-08-26 13:58:20,366 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46519
-2022-08-26 13:58:20,366 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,366 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,366 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,366 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,366 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vvgs9v1u
-2022-08-26 13:58:20,367 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,369 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46351
-2022-08-26 13:58:20,369 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46351
-2022-08-26 13:58:20,369 - distributed.worker - INFO -           Worker name:                          8
-2022-08-26 13:58:20,369 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34317
-2022-08-26 13:58:20,369 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,369 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,369 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,369 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,369 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-w11n45ey
-2022-08-26 13:58:20,369 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,393 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46113
-2022-08-26 13:58:20,393 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46113
-2022-08-26 13:58:20,393 - distributed.worker - INFO -           Worker name:                          9
-2022-08-26 13:58:20,393 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36033
-2022-08-26 13:58:20,393 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,393 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,393 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,393 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,393 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wr36nzzg
-2022-08-26 13:58:20,393 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,425 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42091
-2022-08-26 13:58:20,425 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42091
-2022-08-26 13:58:20,425 - distributed.worker - INFO -           Worker name:                         17
-2022-08-26 13:58:20,425 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40203
-2022-08-26 13:58:20,425 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,426 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,426 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,426 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-eozpltut
-2022-08-26 13:58:20,426 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,452 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41415
-2022-08-26 13:58:20,453 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41415
-2022-08-26 13:58:20,453 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 13:58:20,453 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41629
-2022-08-26 13:58:20,453 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,453 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,453 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,454 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,454 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-w3zn4r0k
-2022-08-26 13:58:20,454 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,485 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45677
-2022-08-26 13:58:20,485 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45677
-2022-08-26 13:58:20,485 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 13:58:20,485 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34481
-2022-08-26 13:58:20,485 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,485 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,485 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,485 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,485 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zg0p92gg
-2022-08-26 13:58:20,485 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,509 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38039
-2022-08-26 13:58:20,509 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38039
-2022-08-26 13:58:20,509 - distributed.worker - INFO -           Worker name:                         21
-2022-08-26 13:58:20,509 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45621
-2022-08-26 13:58:20,509 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,509 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,509 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,509 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,509 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-umfy83kc
-2022-08-26 13:58:20,509 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,565 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35265
-2022-08-26 13:58:20,565 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35265
-2022-08-26 13:58:20,565 - distributed.worker - INFO -           Worker name:                         11
-2022-08-26 13:58:20,565 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35599
-2022-08-26 13:58:20,565 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,565 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,565 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,565 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,565 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-80aljmdo
-2022-08-26 13:58:20,565 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,612 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,613 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,636 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,636 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,647 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,648 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,649 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,654 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,690 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,690 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,692 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,694 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,694 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,695 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,710 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34769
-2022-08-26 13:58:20,710 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34769
-2022-08-26 13:58:20,710 - distributed.worker - INFO -           Worker name:                         14
-2022-08-26 13:58:20,710 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44329
-2022-08-26 13:58:20,710 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,710 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,710 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:20,710 - distributed.worker - INFO -                Memory:                   2.62 GiB
-2022-08-26 13:58:20,711 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gemok3ht
-2022-08-26 13:58:20,711 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,716 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,717 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,718 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,742 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,742 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,767 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,778 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,778 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,790 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,811 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,811 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,812 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,878 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,878 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,878 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,879 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,888 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,888 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,890 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,902 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,915 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,915 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,916 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,917 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,918 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,922 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,923 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,923 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,924 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,928 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,928 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,929 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,968 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,969 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,971 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,978 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,978 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,979 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:20,994 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:20,995 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:20,996 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:21,002 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:21,003 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:21,003 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:21,029 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:21,029 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:21,030 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:21,030 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:21,031 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:21,031 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:21,035 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:21,036 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:21,036 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:21,064 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35839
-2022-08-26 13:58:21,064 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:21,065 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:21,078 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33961
-2022-08-26 13:58:21,078 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46467
-2022-08-26 13:58:21,078 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37407
-2022-08-26 13:58:21,079 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a9a31591-932f-4ac2-b525-7d88ed876a40 Address tcp://127.0.0.1:33961 Status: Status.closing
-2022-08-26 13:58:21,079 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3d7c60c0-468b-4b09-b828-995934cdf9f1 Address tcp://127.0.0.1:46467 Status: Status.closing
-2022-08-26 13:58:21,079 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41415
-2022-08-26 13:58:21,079 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45677
-2022-08-26 13:58:21,079 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-31694bbd-f979-439e-8075-bd2c83a04cf0 Address tcp://127.0.0.1:37407 Status: Status.closing
-2022-08-26 13:58:21,080 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-477d4048-522b-4ea3-b1e9-26f1c8b75fc5 Address tcp://127.0.0.1:41415 Status: Status.closing
-2022-08-26 13:58:21,080 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35321
-2022-08-26 13:58:21,080 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d340e476-52cf-4049-8e9a-f4e25082126c Address tcp://127.0.0.1:45677 Status: Status.closing
-2022-08-26 13:58:21,081 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34235
-2022-08-26 13:58:21,081 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1dcf0f6b-d019-47e6-89a2-f6a2c7b80bfa Address tcp://127.0.0.1:35321 Status: Status.closing
-2022-08-26 13:58:21,082 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35281
-2022-08-26 13:58:21,082 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46351
-2022-08-26 13:58:21,082 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0f9530a0-95e2-4317-8d22-12f5e3ba1074 Address tcp://127.0.0.1:34235 Status: Status.closing
-2022-08-26 13:58:21,083 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46113
-2022-08-26 13:58:21,083 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fc6c3807-771b-4b0b-b401-8436bcb76d0e Address tcp://127.0.0.1:35281 Status: Status.closing
-2022-08-26 13:58:21,083 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dd8f5df7-a02a-4a7b-9a81-6f3f289bc586 Address tcp://127.0.0.1:46351 Status: Status.closing
-2022-08-26 13:58:21,084 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fc246aac-91ea-437b-bb3e-bb2838b1c68a Address tcp://127.0.0.1:46113 Status: Status.closing
-2022-08-26 13:58:21,091 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42209
-2022-08-26 13:58:21,092 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35265
-2022-08-26 13:58:21,092 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43121
-2022-08-26 13:58:21,093 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-51656137-db4a-4bab-b7f0-7a7935ca91cc Address tcp://127.0.0.1:42209 Status: Status.closing
-2022-08-26 13:58:21,093 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a5f81258-6d56-4960-aa88-5292d2bb5adb Address tcp://127.0.0.1:35265 Status: Status.closing
-2022-08-26 13:58:21,094 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d86d9c2d-c52b-43ed-9417-f5d37bafe378 Address tcp://127.0.0.1:43121 Status: Status.closing
-2022-08-26 13:58:21,095 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44697
-2022-08-26 13:58:21,096 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34769
-2022-08-26 13:58:21,097 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43535
-2022-08-26 13:58:21,097 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2b95aeee-46aa-4644-b612-de7ca8ff24d7 Address tcp://127.0.0.1:44697 Status: Status.closing
-2022-08-26 13:58:21,097 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5b224d89-a382-419d-bb03-032cba049605 Address tcp://127.0.0.1:34769 Status: Status.closing
-2022-08-26 13:58:21,098 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6741bbfe-32b7-465a-9bbc-4d9b735508a4 Address tcp://127.0.0.1:43535 Status: Status.closing
-2022-08-26 13:58:21,115 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43885
-2022-08-26 13:58:21,116 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ef65b7d4-ea3c-4279-bf8d-c39be8a6145e Address tcp://127.0.0.1:43885 Status: Status.closing
-2022-08-26 13:58:21,131 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42091
-2022-08-26 13:58:21,132 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dca0723d-737d-420d-a5b5-c66d624df36b Address tcp://127.0.0.1:42091 Status: Status.closing
-2022-08-26 13:58:21,150 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38147
-2022-08-26 13:58:21,152 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f0ee2fe9-c5b9-43e2-8b30-8bef794b5170 Address tcp://127.0.0.1:38147 Status: Status.closing
-2022-08-26 13:58:21,166 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42669
-2022-08-26 13:58:21,168 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7a9309d5-c711-454e-8294-8d27831ef9af Address tcp://127.0.0.1:42669 Status: Status.closing
-2022-08-26 13:58:21,202 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39783
-2022-08-26 13:58:21,203 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a2731398-79ea-46c3-b119-c238ccb8109a Address tcp://127.0.0.1:39783 Status: Status.closing
-2022-08-26 13:58:21,205 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38039
-2022-08-26 13:58:21,206 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-77c2fdcc-98f1-4264-9788-dfea59afc697 Address tcp://127.0.0.1:38039 Status: Status.closing
-2022-08-26 13:58:21,226 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34403
-2022-08-26 13:58:21,227 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-28bdea6a-67fc-47ef-8e49-ba0933740b85 Address tcp://127.0.0.1:34403 Status: Status.closing
-2022-08-26 13:58:21,236 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43213
-2022-08-26 13:58:21,237 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-54083f51-661d-4353-a472-38f022fb5b6d Address tcp://127.0.0.1:43213 Status: Status.closing
-PASSED
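The run above shows a LocalCluster whose workers each report one thread and roughly 2.62 GiB of memory, register with the scheduler at tcp://127.0.0.1:35839, and are then stopped. For readers skimming this log, here is a minimal sketch of the dask.distributed calls that produce this kind of startup and shutdown sequence; the worker count, thread count, and memory limit are illustrative assumptions, not the values hard-coded in the test.

    from dask.distributed import Client, LocalCluster

    # Illustrative sizes only, loosely mirroring the "Threads: 1" /
    # "Memory: 2.62 GiB" lines above rather than the test's exact setup.
    cluster = LocalCluster(n_workers=4, threads_per_worker=1, memory_limit="2GiB")
    client = Client(cluster)            # workers register with the scheduler here
    print(list(client.scheduler_info()["workers"]))
    client.close()                      # leads to the "Stopping worker" messages
    cluster.close()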
-distributed/deploy/tests/test_local.py::test_defaults_6 2022-08-26 13:58:22,317 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40017
-2022-08-26 13:58:22,317 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40017
-2022-08-26 13:58:22,317 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:22,317 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45607
-2022-08-26 13:58:22,317 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38411
-2022-08-26 13:58:22,317 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:22,317 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:58:22,317 - distributed.worker - INFO -                Memory:                  20.94 GiB
-2022-08-26 13:58:22,317 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_wqqt4jw
-2022-08-26 13:58:22,318 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:22,319 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41521
-2022-08-26 13:58:22,319 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41521
-2022-08-26 13:58:22,319 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:22,319 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32895
-2022-08-26 13:58:22,319 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38411
-2022-08-26 13:58:22,319 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:22,319 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:58:22,319 - distributed.worker - INFO -                Memory:                  20.94 GiB
-2022-08-26 13:58:22,319 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3mfxuq84
-2022-08-26 13:58:22,319 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:22,339 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33843
-2022-08-26 13:58:22,339 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33843
-2022-08-26 13:58:22,339 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:22,339 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33749
-2022-08-26 13:58:22,339 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38411
-2022-08-26 13:58:22,339 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:22,339 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:58:22,339 - distributed.worker - INFO -                Memory:                  20.94 GiB
-2022-08-26 13:58:22,339 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wfmiexeg
-2022-08-26 13:58:22,339 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:22,561 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38411
-2022-08-26 13:58:22,561 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:22,562 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:22,562 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38411
-2022-08-26 13:58:22,562 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:22,563 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:22,563 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38411
-2022-08-26 13:58:22,564 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:22,564 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:22,590 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41521
-2022-08-26 13:58:22,591 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40017
-2022-08-26 13:58:22,591 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cc8e1813-347a-4b8f-96c9-304061338551 Address tcp://127.0.0.1:41521 Status: Status.closing
-2022-08-26 13:58:22,591 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33843
-2022-08-26 13:58:22,592 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-905f2f33-8a51-4259-815d-38260577cf97 Address tcp://127.0.0.1:40017 Status: Status.closing
-2022-08-26 13:58:22,592 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e23e1ce9-d5c4-4f32-bc8c-3da00d87b355 Address tcp://127.0.0.1:33843 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_local.py::test_worker_params PASSED
-distributed/deploy/tests/test_local.py::test_memory_limit_none PASSED
-distributed/deploy/tests/test_local.py::test_cleanup 2022-08-26 13:58:23,309 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34089
-2022-08-26 13:58:23,309 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34089
-2022-08-26 13:58:23,309 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:23,309 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46361
-2022-08-26 13:58:23,309 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40113
-2022-08-26 13:58:23,309 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:23,309 - distributed.worker - INFO -               Threads:                          6
-2022-08-26 13:58:23,309 - distributed.worker - INFO -                Memory:                  31.41 GiB
-2022-08-26 13:58:23,309 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bu0ltxnt
-2022-08-26 13:58:23,309 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:23,331 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38257
-2022-08-26 13:58:23,331 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38257
-2022-08-26 13:58:23,332 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:23,332 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39673
-2022-08-26 13:58:23,332 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40113
-2022-08-26 13:58:23,332 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:23,332 - distributed.worker - INFO -               Threads:                          6
-2022-08-26 13:58:23,332 - distributed.worker - INFO -                Memory:                  31.41 GiB
-2022-08-26 13:58:23,332 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-v7p9pz9n
-2022-08-26 13:58:23,332 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:23,549 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40113
-2022-08-26 13:58:23,549 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:23,550 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:23,558 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40113
-2022-08-26 13:58:23,559 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:23,559 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:23,593 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38257
-2022-08-26 13:58:23,594 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34089
-2022-08-26 13:58:23,594 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a5885064-6cc4-469c-a974-8ea795b75498 Address tcp://127.0.0.1:38257 Status: Status.closing
-2022-08-26 13:58:23,595 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-47cd7bb5-bf9a-47f5-95a8-165d43ec0632 Address tcp://127.0.0.1:34089 Status: Status.closing
-2022-08-26 13:58:24,245 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46341
-2022-08-26 13:58:24,245 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46341
-2022-08-26 13:58:24,245 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:24,245 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36195
-2022-08-26 13:58:24,245 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40113
-2022-08-26 13:58:24,245 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:24,245 - distributed.worker - INFO -               Threads:                          6
-2022-08-26 13:58:24,245 - distributed.worker - INFO -                Memory:                  31.41 GiB
-2022-08-26 13:58:24,245 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d96g029u
-2022-08-26 13:58:24,245 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:24,261 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39691
-2022-08-26 13:58:24,261 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39691
-2022-08-26 13:58:24,261 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:24,261 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42149
-2022-08-26 13:58:24,261 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40113
-2022-08-26 13:58:24,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:24,261 - distributed.worker - INFO -               Threads:                          6
-2022-08-26 13:58:24,261 - distributed.worker - INFO -                Memory:                  31.41 GiB
-2022-08-26 13:58:24,261 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8jujys5r
-2022-08-26 13:58:24,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:24,468 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40113
-2022-08-26 13:58:24,469 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:24,469 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:24,481 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40113
-2022-08-26 13:58:24,482 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:24,482 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:24,525 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46341
-2022-08-26 13:58:24,525 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39691
-2022-08-26 13:58:24,526 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b7727f70-bd9f-4f1f-87c2-bfe5b76dd548 Address tcp://127.0.0.1:46341 Status: Status.closing
-2022-08-26 13:58:24,526 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a390a2d1-f7a4-4836-b8d6-95b0104e642c Address tcp://127.0.0.1:39691 Status: Status.closing
-PASSED
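The test_cleanup output above shows the startup banner and the "Stopping worker" messages twice, i.e. one set of workers is brought up and torn down and then a second set is started. Using LocalCluster and Client as context managers is a simple way to get that teardown reliably; a short sketch with assumed sizes:

    from dask.distributed import Client, LocalCluster

    # Context managers guarantee the close/stop sequence seen in the log,
    # even if the body raises. Worker count and thread count are assumptions.
    with LocalCluster(n_workers=2, threads_per_worker=1) as cluster:
        with Client(cluster) as client:
            print(client.submit(sum, [1, 2, 3]).result())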
-distributed/deploy/tests/test_local.py::test_repeated PASSED
-distributed/deploy/tests/test_local.py::test_bokeh[True] PASSED
-distributed/deploy/tests/test_local.py::test_bokeh[False] PASSED
-distributed/deploy/tests/test_local.py::test_blocks_until_full PASSED
-distributed/deploy/tests/test_local.py::test_scale_up_and_down PASSED
-distributed/deploy/tests/test_local.py::test_silent_startup PASSED
-distributed/deploy/tests/test_local.py::test_only_local_access PASSED
-distributed/deploy/tests/test_local.py::test_remote_access PASSED
-distributed/deploy/tests/test_local.py::test_memory[None] PASSED
-distributed/deploy/tests/test_local.py::test_memory[3] PASSED
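Several of the quieter tests listed above, such as test_scale_up_and_down and test_blocks_until_full, appear to cover manual resizing of the cluster. A rough sketch of that API, with hypothetical worker counts:

    from dask.distributed import Client, LocalCluster

    cluster = LocalCluster(n_workers=0, threads_per_worker=1)
    client = Client(cluster)

    cluster.scale(3)               # request three workers
    client.wait_for_workers(3)     # block until they have registered
    cluster.scale(1)               # and shrink back down

    client.close()
    cluster.close()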
-distributed/deploy/tests/test_local.py::test_memory_nanny[None] 2022-08-26 13:58:29,866 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38201
-2022-08-26 13:58:29,866 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44991
-2022-08-26 13:58:29,866 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38201
-2022-08-26 13:58:29,866 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44991
-2022-08-26 13:58:29,866 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:29,866 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:29,866 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36645
-2022-08-26 13:58:29,866 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35113
-2022-08-26 13:58:29,866 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41773
-2022-08-26 13:58:29,866 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41773
-2022-08-26 13:58:29,866 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:29,866 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:29,866 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:29,866 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:29,866 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:29,866 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:29,866 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f850gi2u
-2022-08-26 13:58:29,866 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_o30r5or
-2022-08-26 13:58:29,866 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:29,866 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:29,868 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33101
-2022-08-26 13:58:29,868 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33101
-2022-08-26 13:58:29,868 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:29,868 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40713
-2022-08-26 13:58:29,868 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41773
-2022-08-26 13:58:29,868 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:29,868 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:29,869 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:29,869 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2_yi8ylb
-2022-08-26 13:58:29,869 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:29,883 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33121
-2022-08-26 13:58:29,883 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33121
-2022-08-26 13:58:29,883 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 13:58:29,883 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34939
-2022-08-26 13:58:29,883 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41773
-2022-08-26 13:58:29,883 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:29,883 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:29,883 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:29,884 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sfuntdgq
-2022-08-26 13:58:29,884 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:30,115 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41773
-2022-08-26 13:58:30,115 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:30,116 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:30,119 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41773
-2022-08-26 13:58:30,119 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:30,120 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:30,120 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41773
-2022-08-26 13:58:30,120 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:30,121 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:30,121 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41773
-2022-08-26 13:58:30,121 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:30,122 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:30,187 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44991
-2022-08-26 13:58:30,188 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38201
-2022-08-26 13:58:30,188 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33101
-2022-08-26 13:58:30,188 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-338b47f3-385f-441d-87b6-3737c1248b92 Address tcp://127.0.0.1:44991 Status: Status.closing
-2022-08-26 13:58:30,189 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33121
-2022-08-26 13:58:30,189 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bb0ea6d0-b8ac-414e-af9a-046146d950c2 Address tcp://127.0.0.1:38201 Status: Status.closing
-2022-08-26 13:58:30,189 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7f0e817a-cfcf-4eb4-ab0b-20e942d9167d Address tcp://127.0.0.1:33101 Status: Status.closing
-2022-08-26 13:58:30,190 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-49655a70-6d91-4873-9329-329de6ee3afe Address tcp://127.0.0.1:33121 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_local.py::test_memory_nanny[3] 2022-08-26 13:58:30,888 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34861
-2022-08-26 13:58:30,888 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34861
-2022-08-26 13:58:30,888 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:30,888 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32825
-2022-08-26 13:58:30,888 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39673
-2022-08-26 13:58:30,888 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:30,888 - distributed.worker - INFO -               Threads:                          4
-2022-08-26 13:58:30,889 - distributed.worker - INFO -                Memory:                  20.94 GiB
-2022-08-26 13:58:30,889 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e7zxvb7f
-2022-08-26 13:58:30,889 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:30,900 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40829
-2022-08-26 13:58:30,900 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40829
-2022-08-26 13:58:30,900 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:30,900 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35569
-2022-08-26 13:58:30,900 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39673
-2022-08-26 13:58:30,900 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:30,900 - distributed.worker - INFO -               Threads:                          4
-2022-08-26 13:58:30,900 - distributed.worker - INFO -                Memory:                  20.94 GiB
-2022-08-26 13:58:30,900 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-egj31drj
-2022-08-26 13:58:30,901 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:30,903 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40645
-2022-08-26 13:58:30,903 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40645
-2022-08-26 13:58:30,903 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:30,903 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41639
-2022-08-26 13:58:30,903 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39673
-2022-08-26 13:58:30,903 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:30,903 - distributed.worker - INFO -               Threads:                          4
-2022-08-26 13:58:30,903 - distributed.worker - INFO -                Memory:                  20.94 GiB
-2022-08-26 13:58:30,903 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mxmd6eey
-2022-08-26 13:58:30,903 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:31,133 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39673
-2022-08-26 13:58:31,133 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:31,134 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:31,138 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39673
-2022-08-26 13:58:31,138 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:31,139 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:31,145 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39673
-2022-08-26 13:58:31,145 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:31,146 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:31,172 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40829
-2022-08-26 13:58:31,172 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34861
-2022-08-26 13:58:31,173 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40645
-2022-08-26 13:58:31,173 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b9c9d604-a8c3-448e-94f2-61140a171e89 Address tcp://127.0.0.1:40829 Status: Status.closing
-2022-08-26 13:58:31,173 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ece01143-5283-4a69-b85c-69682d99ee3c Address tcp://127.0.0.1:34861 Status: Status.closing
-2022-08-26 13:58:31,173 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e1fbd9a3-35e7-4dc3-8976-743c441e4f04 Address tcp://127.0.0.1:40645 Status: Status.closing
-PASSED
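The test_memory_nanny cases above run each worker under a nanny process and, judging by the name, check the advertised per-worker memory (the "Memory: 20.94 GiB" lines). A minimal sketch of the related options; the concrete limit below is an assumption for illustration:

    from dask.distributed import Client, LocalCluster

    # processes=True runs each worker under a Nanny; memory_limit applies
    # per worker. The 4 GiB figure is illustrative, not the test's value.
    cluster = LocalCluster(n_workers=2, processes=True, memory_limit="4GiB")
    client = Client(cluster)
    info = client.scheduler_info()["workers"]
    print(sum(w["memory_limit"] for w in info.values()))
    client.close()
    cluster.close()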
-distributed/deploy/tests/test_local.py::test_death_timeout_raises PASSED
-distributed/deploy/tests/test_local.py::test_bokeh_kwargs PASSED
-distributed/deploy/tests/test_local.py::test_io_loop_periodic_callbacks 2022-08-26 13:58:32,223 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46457
-2022-08-26 13:58:32,223 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46457
-2022-08-26 13:58:32,223 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34189
-2022-08-26 13:58:32,223 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:32,223 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34189
-2022-08-26 13:58:32,223 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43301
-2022-08-26 13:58:32,223 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:32,223 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43025
-2022-08-26 13:58:32,223 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43607
-2022-08-26 13:58:32,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,223 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43025
-2022-08-26 13:58:32,223 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:32,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,223 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:32,223 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:32,223 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tur9avk9
-2022-08-26 13:58:32,223 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:32,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,223 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ezc6nwxn
-2022-08-26 13:58:32,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,233 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41137
-2022-08-26 13:58:32,234 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41137
-2022-08-26 13:58:32,234 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:32,234 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33409
-2022-08-26 13:58:32,234 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43025
-2022-08-26 13:58:32,234 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,234 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:32,234 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:32,234 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q9bw6upp
-2022-08-26 13:58:32,234 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,242 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33657
-2022-08-26 13:58:32,242 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33657
-2022-08-26 13:58:32,242 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 13:58:32,242 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36359
-2022-08-26 13:58:32,242 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43025
-2022-08-26 13:58:32,242 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,242 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:32,242 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:32,242 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7a0gcnpl
-2022-08-26 13:58:32,242 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,470 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43025
-2022-08-26 13:58:32,470 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,471 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:32,478 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43025
-2022-08-26 13:58:32,478 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,479 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:32,479 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43025
-2022-08-26 13:58:32,480 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,480 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:32,495 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43025
-2022-08-26 13:58:32,495 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:32,496 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:32,530 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41137
-2022-08-26 13:58:32,531 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46457
-2022-08-26 13:58:32,531 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34189
-2022-08-26 13:58:32,531 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4c2b7bd8-ac8c-4113-9af8-8aee083e5445 Address tcp://127.0.0.1:41137 Status: Status.closing
-2022-08-26 13:58:32,532 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33657
-2022-08-26 13:58:32,532 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-53832697-05c4-4150-9b05-8b9a76de5d75 Address tcp://127.0.0.1:46457 Status: Status.closing
-2022-08-26 13:58:32,532 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-467f3ffb-75c0-4f0c-90db-2be512c12d6e Address tcp://127.0.0.1:34189 Status: Status.closing
-2022-08-26 13:58:32,532 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e3d963da-cee6-4788-a837-a9bbd53350b1 Address tcp://127.0.0.1:33657 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_local.py::test_logging PASSED
-distributed/deploy/tests/test_local.py::test_ipywidgets Tab(children=(HTML(value='<div class="jp-RenderedHTMLCommon jp-RenderedHTML jp-mod-trusted jp-OutputArea-output">\n    <div style="width: 24px; height: 24px; background-color: #e1e1e1; border: 3px solid #9D9D9D; border-radius: 5px; position: absolute;">\n    </div>\n    <div style="margin-left: 48px;">\n        <h3 style="margin-bottom: 0px; margin-top: 0px;">LocalCluster</h3>\n        <p style="color: #9D9D9D; margin-bottom: 0px;">cc832c83</p>\n        <table style="width: 100%; text-align: left;">\n            <tr>\n                <td style="text-align: left;">\n                    <strong>Dashboard:</strong> <a href="http://192.168.1.159:43049/status"; target="_blank">http://192.168.1.159:43049/status</a>\n                </td>\n                <td style="text-align: left;">\n                    <strong>Workers:</strong> 0\n                </td>\n            </tr>\n            <tr>\n                <td style="text-al
ign: left;">\n                    <strong>Total threads:</strong> 0\n                </td>\n                <td style="text-align: left;">\n                    <strong>Total memory:</strong> 0 B\n                </td>\n            </tr>\n            \n            <tr>\n    <td style="text-align: left;"><strong>Status:</strong> running</td>\n    <td style="text-align: left;"><strong>Using processes:</strong> False</td>\n</tr>\n\n            \n        </table>\n\n        <details>\n            <summary style="margin-bottom: 20px;">\n                <h3 style="display: inline;">Scheduler Info</h3>\n            </summary>\n\n            <div style="">\n    <div>\n        <div style="width: 24px; height: 24px; background-color: #FFF7E5; border: 3px solid #FF6132; border-radius: 5px; position: absolute;"> </div>\n        <div style="margin-left: 48px;">\n            <h3 style="margin-bottom: 0px;">Scheduler</h3>\n            <p style="color: #9D9D9D; margin-bottom: 0px;">Scheduler-ebbecc1a
 -831a-42c5-a007-0730cbed7662</p>\n            <table style="width: 100%; text-align: left;">\n                <tr>\n                    <td style="text-align: left;">\n                        <strong>Comm:</strong> inproc://192.168.1.159/518557/613\n                    </td>\n                    <td style="text-align: left;">\n                        <strong>Workers:</strong> 0\n                    </td>\n                </tr>\n                <tr>\n                    <td style="text-align: left;">\n                        <strong>Dashboard:</strong> <a href="http://192.168.1.159:43049/status"; target="_blank">http://192.168.1.159:43049/status</a>\n                    </td>\n                    <td style="text-align: left;">\n                        <strong>Total threads:</strong> 0\n                    </td>\n                </tr>\n                <tr>\n                    <td style="text-align: left;">\n                        <strong>Started:</strong> Just now\n                  
   </td>\n                    <td style="text-align: left;">\n                        <strong>Total memory:</strong> 0 B\n                    </td>\n                </tr>\n            </table>\n        </div>\n    </div>\n\n    <details style="margin-left: 48px;">\n        <summary style="margin-bottom: 20px;">\n            <h3 style="display: inline;">Workers</h3>\n        </summary>\n\n        \n\n    </details>\n</div>\n\n        </details>\n    </div>\n</div>'), VBox(children=(HTML(value='\n        <table>\n            <tr><td style="text-align: left;">Scaling mode: Manual</td></tr>\n            <tr><td style="text-align: left;">Workers: 0</td></tr>\n        </table>\n        '), Accordion(children=(HBox(children=(IntText(value=0, description='Workers', layout=Layout(width='150px')), Button(description='Scale', layout=Layout(width='150px'), style=ButtonStyle()))), HBox(children=(IntText(value=0, description='Minimum', layout=Layout(width='150px')), IntText(value=0, description='M
 aximum', layout=Layout(width='150px')), Button(description='Adapt', layout=Layout(width='150px'), style=ButtonStyle())))), layout=Layout(min_width='500px'), selected_index=None, _titles={'0': 'Manual Scaling', '1': 'Adaptive Scaling'})))), _titles={'0': 'Status', '1': 'Scaling'})
-PASSED
-distributed/deploy/tests/test_local.py::test_ipywidgets_loop [... ipywidgets Tab/HTML repr of the LocalCluster status widget, line-wrapped by the archive, elided ...]
-PASSED
-distributed/deploy/tests/test_local.py::test_no_ipywidgets PASSED
-distributed/deploy/tests/test_local.py::test_scale PASSED
-distributed/deploy/tests/test_local.py::test_adapt PASSED
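The two results above exercise LocalCluster's manual and adaptive scaling. As a minimal illustration (not taken from the diff; the worker counts are arbitrary):

    from distributed import LocalCluster

    # Manual scaling (test_scale) and adaptive scaling (test_adapt) on a
    # local, in-process cluster; the numbers here are arbitrary examples.
    with LocalCluster(n_workers=1, processes=False) as cluster:
        cluster.scale(2)                     # ask for two workers
        cluster.adapt(minimum=0, maximum=4)  # let the cluster resize itself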
-distributed/deploy/tests/test_local.py::test_adapt_then_manual FAILED
-distributed/deploy/tests/test_local.py::test_local_tls[True] PASSED
-distributed/deploy/tests/test_local.py::test_local_tls[False] 2022-08-26 13:58:37,563 - distributed.scheduler - INFO - State start
-2022-08-26 13:58:37,565 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:58:37,565 - distributed.scheduler - INFO -   Scheduler at: tls://192.168.1.159:36987
-2022-08-26 13:58:37,565 - distributed.scheduler - INFO -   dashboard at:                    :44337
-2022-08-26 13:58:40,577 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:58:40,578 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/deploy/tests/test_local.py::test_scale_retires_workers 2022-08-26 13:58:40,611 - distributed.scheduler - INFO - State start
-2022-08-26 13:58:40,613 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:58:40,613 - distributed.scheduler - INFO -   Scheduler at: inproc://192.168.1.159/518557/666
-2022-08-26 13:58:40,613 - distributed.scheduler - INFO -   dashboard at:           localhost:42229
-2022-08-26 13:58:40,616 - distributed.scheduler - INFO - Receive client connection: Client-dcf10d8d-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:40,616 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:40,620 - distributed.worker - INFO -       Start worker at: inproc://192.168.1.159/518557/671
-2022-08-26 13:58:40,620 - distributed.worker - INFO -          Listening to:        inproc192.168.1.159
-2022-08-26 13:58:40,620 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:40,620 - distributed.worker - INFO -          dashboard at:        192.168.1.159:44687
-2022-08-26 13:58:40,621 - distributed.worker - INFO - Waiting to connect to: inproc://192.168.1.159/518557/666
-2022-08-26 13:58:40,621 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:40,621 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:58:40,621 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:58:40,621 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ewccpgx8
-2022-08-26 13:58:40,621 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:40,621 - distributed.worker - INFO -       Start worker at: inproc://192.168.1.159/518557/672
-2022-08-26 13:58:40,621 - distributed.worker - INFO -          Listening to:        inproc192.168.1.159
-2022-08-26 13:58:40,621 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:40,621 - distributed.worker - INFO -          dashboard at:        192.168.1.159:34513
-2022-08-26 13:58:40,621 - distributed.worker - INFO - Waiting to connect to: inproc://192.168.1.159/518557/666
-2022-08-26 13:58:40,622 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:40,622 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:58:40,622 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:58:40,622 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-00tajnml
-2022-08-26 13:58:40,622 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:40,623 - distributed.scheduler - INFO - Register worker <WorkerState 'inproc://192.168.1.159/518557/671', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 13:58:40,623 - distributed.scheduler - INFO - Starting worker compute stream, inproc://192.168.1.159/518557/671
-2022-08-26 13:58:40,623 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:40,624 - distributed.scheduler - INFO - Register worker <WorkerState 'inproc://192.168.1.159/518557/672', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 13:58:40,624 - distributed.scheduler - INFO - Starting worker compute stream, inproc://192.168.1.159/518557/672
-2022-08-26 13:58:40,624 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:40,624 - distributed.worker - INFO -         Registered to: inproc://192.168.1.159/518557/666
-2022-08-26 13:58:40,624 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:40,624 - distributed.worker - INFO -         Registered to: inproc://192.168.1.159/518557/666
-2022-08-26 13:58:40,624 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:40,625 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:40,625 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:40,627 - distributed.worker - INFO - Stopping worker at inproc://192.168.1.159/518557/672
-2022-08-26 13:58:40,628 - distributed.scheduler - INFO - Remove worker <WorkerState 'inproc://192.168.1.159/518557/672', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 13:58:40,628 - distributed.core - INFO - Removing comms to inproc://192.168.1.159/518557/672
-2022-08-26 13:58:40,628 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-747e33c4-765d-47d3-8175-3f81deabefdf Address inproc://192.168.1.159/518557/672 Status: Status.closing
-2022-08-26 13:58:40,638 - distributed.scheduler - INFO - Remove client Client-dcf10d8d-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:40,638 - distributed.scheduler - INFO - Remove client Client-dcf10d8d-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:40,638 - distributed.scheduler - INFO - Close client connection: Client-dcf10d8d-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:40,639 - distributed.worker - INFO - Stopping worker at inproc://192.168.1.159/518557/671
-2022-08-26 13:58:40,639 - distributed.scheduler - INFO - Remove worker <WorkerState 'inproc://192.168.1.159/518557/671', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 13:58:40,639 - distributed.core - INFO - Removing comms to inproc://192.168.1.159/518557/671
-2022-08-26 13:58:40,639 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:58:40,640 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d76f48d2-e63c-4aa6-91c6-5818ef9c0cda Address inproc://192.168.1.159/518557/671 Status: Status.closing
-2022-08-26 13:58:40,640 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:58:40,640 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/deploy/tests/test_local.py::test_local_tls_restart 2022-08-26 13:58:40,671 - distributed.scheduler - INFO - State start
-2022-08-26 13:58:40,672 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:58:40,673 - distributed.scheduler - INFO -   Scheduler at: tls://192.168.1.159:47771
-2022-08-26 13:58:40,673 - distributed.scheduler - INFO -   dashboard at:                    :40713
-2022-08-26 13:58:40,688 - distributed.nanny - INFO -         Start Nanny at: 'tls://192.168.1.159:46649'
-2022-08-26 13:58:41,113 - distributed.worker - INFO -       Start worker at:  tls://192.168.1.159:45371
-2022-08-26 13:58:41,113 - distributed.worker - INFO -          Listening to:  tls://192.168.1.159:45371
-2022-08-26 13:58:41,113 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:41,113 - distributed.worker - INFO -          dashboard at:        192.168.1.159:42795
-2022-08-26 13:58:41,113 - distributed.worker - INFO - Waiting to connect to:  tls://192.168.1.159:47771
-2022-08-26 13:58:41,113 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:41,113 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:58:41,113 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:58:41,113 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wd0b37el
-2022-08-26 13:58:41,113 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:41,346 - distributed.scheduler - INFO - Register worker <WorkerState 'tls://192.168.1.159:45371', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 13:58:41,346 - distributed.scheduler - INFO - Starting worker compute stream, tls://192.168.1.159:45371
-2022-08-26 13:58:41,346 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:41,346 - distributed.worker - INFO -         Registered to:  tls://192.168.1.159:47771
-2022-08-26 13:58:41,346 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:41,347 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:41,357 - distributed.scheduler - INFO - Receive client connection: Client-dd60fcf7-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:41,357 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:41,557 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 13:58:41,558 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:58:41,562 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 13:58:41,563 - distributed.worker - INFO - Stopping worker at tls://192.168.1.159:45371
-2022-08-26 13:58:41,564 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7eee2d8d-33a4-45d8-a673-ecb4bb8c35ad Address tls://192.168.1.159:45371 Status: Status.closing
-2022-08-26 13:58:41,564 - distributed.scheduler - INFO - Remove worker <WorkerState 'tls://192.168.1.159:45371', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 13:58:41,564 - distributed.core - INFO - Removing comms to tls://192.168.1.159:45371
-2022-08-26 13:58:41,564 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:58:41,726 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 13:58:42,153 - distributed.worker - INFO -       Start worker at:  tls://192.168.1.159:34777
-2022-08-26 13:58:42,153 - distributed.worker - INFO -          Listening to:  tls://192.168.1.159:34777
-2022-08-26 13:58:42,153 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:42,153 - distributed.worker - INFO -          dashboard at:        192.168.1.159:39203
-2022-08-26 13:58:42,153 - distributed.worker - INFO - Waiting to connect to:  tls://192.168.1.159:47771
-2022-08-26 13:58:42,153 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,153 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:58:42,153 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:58:42,153 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3lp9qk1y
-2022-08-26 13:58:42,153 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,387 - distributed.scheduler - INFO - Register worker <WorkerState 'tls://192.168.1.159:34777', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 13:58:42,388 - distributed.scheduler - INFO - Starting worker compute stream, tls://192.168.1.159:34777
-2022-08-26 13:58:42,388 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:42,388 - distributed.worker - INFO -         Registered to:  tls://192.168.1.159:47771
-2022-08-26 13:58:42,388 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,389 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:42,734 - distributed.scheduler - INFO - Remove client Client-dd60fcf7-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:42,734 - distributed.scheduler - INFO - Remove client Client-dd60fcf7-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:42,735 - distributed.scheduler - INFO - Close client connection: Client-dd60fcf7-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:42,736 - distributed.nanny - INFO - Closing Nanny at 'tls://192.168.1.159:46649'.
-2022-08-26 13:58:42,736 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 13:58:42,736 - distributed.worker - INFO - Stopping worker at tls://192.168.1.159:34777
-2022-08-26 13:58:42,737 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0bb8f12c-048f-4275-9dbd-893580eec7cb Address tls://192.168.1.159:34777 Status: Status.closing
-2022-08-26 13:58:42,737 - distributed.scheduler - INFO - Remove worker <WorkerState 'tls://192.168.1.159:34777', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 13:58:42,737 - distributed.core - INFO - Removing comms to tls://192.168.1.159:34777
-2022-08-26 13:58:42,737 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:58:42,901 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:58:42,902 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/deploy/tests/test_local.py::test_asynchronous_property 2022-08-26 13:58:42,929 - distributed.scheduler - INFO - State start
-2022-08-26 13:58:42,931 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:58:42,931 - distributed.scheduler - INFO -   Scheduler at: inproc://192.168.1.159/518557/675
-2022-08-26 13:58:42,931 - distributed.scheduler - INFO -   dashboard at:           localhost:44405
-2022-08-26 13:58:42,941 - distributed.worker - INFO -       Start worker at: inproc://192.168.1.159/518557/678
-2022-08-26 13:58:42,941 - distributed.worker - INFO -          Listening to:        inproc192.168.1.159
-2022-08-26 13:58:42,941 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 13:58:42,941 - distributed.worker - INFO -          dashboard at:        192.168.1.159:39011
-2022-08-26 13:58:42,941 - distributed.worker - INFO - Waiting to connect to: inproc://192.168.1.159/518557/675
-2022-08-26 13:58:42,941 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,941 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:42,941 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:42,941 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wml35syb
-2022-08-26 13:58:42,941 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,942 - distributed.worker - INFO -       Start worker at: inproc://192.168.1.159/518557/679
-2022-08-26 13:58:42,942 - distributed.worker - INFO -          Listening to:        inproc192.168.1.159
-2022-08-26 13:58:42,942 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 13:58:42,942 - distributed.worker - INFO -          dashboard at:        192.168.1.159:34619
-2022-08-26 13:58:42,942 - distributed.worker - INFO - Waiting to connect to: inproc://192.168.1.159/518557/675
-2022-08-26 13:58:42,942 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,942 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:42,942 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:42,942 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yb5kq7lc
-2022-08-26 13:58:42,942 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,943 - distributed.worker - INFO -       Start worker at: inproc://192.168.1.159/518557/680
-2022-08-26 13:58:42,943 - distributed.worker - INFO -          Listening to:        inproc192.168.1.159
-2022-08-26 13:58:42,943 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:42,943 - distributed.worker - INFO -          dashboard at:        192.168.1.159:37623
-2022-08-26 13:58:42,943 - distributed.worker - INFO - Waiting to connect to: inproc://192.168.1.159/518557/675
-2022-08-26 13:58:42,943 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,943 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:42,943 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:42,943 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xjex6g1s
-2022-08-26 13:58:42,943 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,944 - distributed.worker - INFO -       Start worker at: inproc://192.168.1.159/518557/681
-2022-08-26 13:58:42,944 - distributed.worker - INFO -          Listening to:        inproc192.168.1.159
-2022-08-26 13:58:42,944 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:42,944 - distributed.worker - INFO -          dashboard at:        192.168.1.159:34029
-2022-08-26 13:58:42,944 - distributed.worker - INFO - Waiting to connect to: inproc://192.168.1.159/518557/675
-2022-08-26 13:58:42,944 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,944 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 13:58:42,944 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 13:58:42,944 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mjfbbaje
-2022-08-26 13:58:42,944 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,946 - distributed.scheduler - INFO - Register worker <WorkerState 'inproc://192.168.1.159/518557/678', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 13:58:42,946 - distributed.scheduler - INFO - Starting worker compute stream, inproc://192.168.1.159/518557/678
-2022-08-26 13:58:42,946 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:42,947 - distributed.scheduler - INFO - Register worker <WorkerState 'inproc://192.168.1.159/518557/679', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 13:58:42,947 - distributed.scheduler - INFO - Starting worker compute stream, inproc://192.168.1.159/518557/679
-2022-08-26 13:58:42,947 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:42,947 - distributed.scheduler - INFO - Register worker <WorkerState 'inproc://192.168.1.159/518557/680', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 13:58:42,947 - distributed.scheduler - INFO - Starting worker compute stream, inproc://192.168.1.159/518557/680
-2022-08-26 13:58:42,948 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:42,948 - distributed.scheduler - INFO - Register worker <WorkerState 'inproc://192.168.1.159/518557/681', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 13:58:42,948 - distributed.scheduler - INFO - Starting worker compute stream, inproc://192.168.1.159/518557/681
-2022-08-26 13:58:42,948 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:42,948 - distributed.worker - INFO -         Registered to: inproc://192.168.1.159/518557/675
-2022-08-26 13:58:42,948 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,948 - distributed.worker - INFO -         Registered to: inproc://192.168.1.159/518557/675
-2022-08-26 13:58:42,949 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,949 - distributed.worker - INFO -         Registered to: inproc://192.168.1.159/518557/675
-2022-08-26 13:58:42,949 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,949 - distributed.worker - INFO -         Registered to: inproc://192.168.1.159/518557/675
-2022-08-26 13:58:42,949 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:42,949 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:42,950 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:42,950 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:42,950 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:42,951 - distributed.worker - INFO - Stopping worker at inproc://192.168.1.159/518557/680
-2022-08-26 13:58:42,951 - distributed.worker - INFO - Stopping worker at inproc://192.168.1.159/518557/681
-2022-08-26 13:58:42,951 - distributed.worker - INFO - Stopping worker at inproc://192.168.1.159/518557/678
-2022-08-26 13:58:42,952 - distributed.worker - INFO - Stopping worker at inproc://192.168.1.159/518557/679
-2022-08-26 13:58:42,952 - distributed.scheduler - INFO - Remove worker <WorkerState 'inproc://192.168.1.159/518557/680', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 13:58:42,952 - distributed.core - INFO - Removing comms to inproc://192.168.1.159/518557/680
-2022-08-26 13:58:42,953 - distributed.scheduler - INFO - Remove worker <WorkerState 'inproc://192.168.1.159/518557/681', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 13:58:42,953 - distributed.core - INFO - Removing comms to inproc://192.168.1.159/518557/681
-2022-08-26 13:58:42,953 - distributed.scheduler - INFO - Remove worker <WorkerState 'inproc://192.168.1.159/518557/678', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 13:58:42,953 - distributed.core - INFO - Removing comms to inproc://192.168.1.159/518557/678
-2022-08-26 13:58:42,953 - distributed.scheduler - INFO - Remove worker <WorkerState 'inproc://192.168.1.159/518557/679', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 13:58:42,953 - distributed.core - INFO - Removing comms to inproc://192.168.1.159/518557/679
-2022-08-26 13:58:42,953 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 13:58:42,953 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-29896e9d-2d0e-4fd7-9e92-6f105ff6dec8 Address inproc://192.168.1.159/518557/680 Status: Status.closing
-2022-08-26 13:58:42,953 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5cee86a9-e5b2-4cb3-b395-d78ad6e12e59 Address inproc://192.168.1.159/518557/681 Status: Status.closing
-2022-08-26 13:58:42,954 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-389fc550-578a-460e-86e3-f58809271409 Address inproc://192.168.1.159/518557/678 Status: Status.closing
-2022-08-26 13:58:42,954 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-db5675c6-bc31-4d4b-b902-8e10d15c9089 Address inproc://192.168.1.159/518557/679 Status: Status.closing
-2022-08-26 13:58:42,955 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:58:42,956 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/deploy/tests/test_local.py::test_protocol_inproc PASSED
-distributed/deploy/tests/test_local.py::test_protocol_tcp PASSED
-distributed/deploy/tests/test_local.py::test_protocol_ip PASSED
-distributed/deploy/tests/test_local.py::test_worker_class_worker PASSED
-distributed/deploy/tests/test_local.py::test_worker_class_nanny PASSED
-distributed/deploy/tests/test_local.py::test_worker_class_nanny_async PASSED
-distributed/deploy/tests/test_local.py::test_starts_up_sync PASSED
-distributed/deploy/tests/test_local.py::test_dont_select_closed_worker 2022-08-26 13:58:46,869 - distributed.client - ERROR - 
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x7f164c0e8690>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
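The traceback above records the client's reconnect attempt being cancelled while it sleeps between retries: the refused TCP connection is converted to CommClosedError, and the CancelledError fires inside the backoff sleep. A rough, hypothetical sketch of that retry-with-backoff shape (connect_once, timeout and the backoff constants are stand-ins, not distributed's actual code):

    import asyncio

    async def connect_with_backoff(connect_once, timeout=5.0):
        # Keep retrying a failing connection until the deadline passes;
        # cancellation during the sleep surfaces as CancelledError, as in
        # the log above.
        backoff = 0.1
        deadline = asyncio.get_running_loop().time() + timeout
        while True:
            try:
                return await asyncio.wait_for(connect_once(), timeout)
            except OSError:
                if asyncio.get_running_loop().time() > deadline:
                    raise
                await asyncio.sleep(backoff)
                backoff = min(backoff * 2, 1.0)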
-distributed/deploy/tests/test_local.py::test_client_cluster_synchronous PASSED
-distributed/deploy/tests/test_local.py::test_scale_memory_cores PASSED
-distributed/deploy/tests/test_local.py::test_repr[2 GiB] PASSED
-distributed/deploy/tests/test_local.py::test_repr[None] PASSED
-distributed/deploy/tests/test_local.py::test_threads_per_worker_set_to_0 PASSED
-distributed/deploy/tests/test_local.py::test_capture_security[True] PASSED
-distributed/deploy/tests/test_local.py::test_capture_security[False] 2022-08-26 13:58:47,278 - distributed.scheduler - INFO - State start
-2022-08-26 13:58:47,282 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:58:47,283 - distributed.scheduler - INFO -   Scheduler at: tls://192.168.1.159:45289
-2022-08-26 13:58:47,283 - distributed.scheduler - INFO -   dashboard at:                    :39621
-2022-08-26 13:58:47,299 - distributed.scheduler - INFO - Receive client connection: Client-e0ebed8e-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:47,300 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:47,311 - distributed.scheduler - INFO - Remove client Client-e0ebed8e-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:47,311 - distributed.scheduler - INFO - Remove client Client-e0ebed8e-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:47,312 - distributed.scheduler - INFO - Close client connection: Client-e0ebed8e-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:58:47,312 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 13:58:47,312 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/deploy/tests/test_local.py::test_no_dangling_asyncio_tasks PASSED
-distributed/deploy/tests/test_local.py::test_async_with PASSED
-distributed/deploy/tests/test_local.py::test_no_workers PASSED
-distributed/deploy/tests/test_local.py::test_cluster_names PASSED
-distributed/deploy/tests/test_local.py::test_local_cluster_redundant_kwarg[True] 2022-08-26 13:58:47,994 - distributed.nanny - ERROR - Failed to initialize Worker
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 853, in _run
-    worker = Worker(**worker_kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 729, in __init__
-    ServerNode.__init__(
-TypeError: Server.__init__() got an unexpected keyword argument 'typo_kwarg'
-2022-08-26 13:58:48,026 - distributed.nanny - ERROR - Failed to start process
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 438, in instantiate
-    result = await self.process.start()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 695, in start
-    msg = await self._wait_until_connected(uid)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 823, in _wait_until_connected
-    raise msg["exception"]
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 853, in _run
-    worker = Worker(**worker_kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 729, in __init__
-    ServerNode.__init__(
-TypeError: Server.__init__() got an unexpected keyword argument 'typo_kwarg'
-PASSED
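The TypeError above is the expected outcome of this test: keyword arguments that LocalCluster does not recognise are forwarded to the worker, so a typo fails at worker startup. A rough sketch of what the test appears to assert (assuming the in-process variant re-raises the same TypeError):

    import pytest
    from distributed import LocalCluster

    # A misspelled keyword is passed through to Worker/Server.__init__ and
    # should fail there, mirroring the nanny traceback logged above.
    with pytest.raises(TypeError, match="unexpected keyword argument"):
        LocalCluster(n_workers=1, processes=False, typo_kwarg="oops")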
-distributed/deploy/tests/test_local.py::test_local_cluster_redundant_kwarg[False] PASSED
-distributed/deploy/tests/test_local.py::test_cluster_info_sync PASSED
-distributed/deploy/tests/test_local.py::test_cluster_info_sync_is_robust_to_network_blips PASSED
-distributed/deploy/tests/test_local.py::test_cluster_host_used_throughout_cluster[True-None] PASSED
-distributed/deploy/tests/test_local.py::test_cluster_host_used_throughout_cluster[True-127.0.0.1] PASSED
-distributed/deploy/tests/test_local.py::test_cluster_host_used_throughout_cluster[False-None] PASSED
-distributed/deploy/tests/test_local.py::test_cluster_host_used_throughout_cluster[False-127.0.0.1] PASSED
-distributed/deploy/tests/test_local.py::test_connect_to_closed_cluster PASSED
-distributed/deploy/tests/test_local.py::test_localcluster_start_exception SKIPPED
-distributed/deploy/tests/test_local.py::test_localcluster_get_client PASSED
-distributed/deploy/tests/test_slow_adaptive.py::test_startup PASSED
-distributed/deploy/tests/test_slow_adaptive.py::test_scale_up_down PASSED
-distributed/deploy/tests/test_slow_adaptive.py::test_adaptive PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_specification PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_spec_sync PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_loop_started_in_constructor FAILED
-distributed/deploy/tests/test_spec_cluster.py::test_loop_started_in_constructor ERROR
-distributed/deploy/tests/test_spec_cluster.py::test_repr PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_scale PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_adaptive_killed_worker SKIPPED
-distributed/deploy/tests/test_spec_cluster.py::test_unexpected_closed_worker PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_restart 2022-08-26 13:58:53,548 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:39551
-2022-08-26 13:58:53,548 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:39551
-2022-08-26 13:58:53,548 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:35485
-2022-08-26 13:58:53,548 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:53,548 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:35485
-2022-08-26 13:58:53,548 - distributed.worker - INFO -          dashboard at:        192.168.1.159:44055
-2022-08-26 13:58:53,548 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:53,548 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:46061
-2022-08-26 13:58:53,548 - distributed.worker - INFO -          dashboard at:        192.168.1.159:36113
-2022-08-26 13:58:53,548 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:53,549 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:46061
-2022-08-26 13:58:53,549 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:53,549 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:53,549 - distributed.worker - INFO -                Memory:                   5.24 GiB
-2022-08-26 13:58:53,549 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:53,549 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nu06jjxj
-2022-08-26 13:58:53,549 - distributed.worker - INFO -                Memory:                   5.24 GiB
-2022-08-26 13:58:53,549 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kq0hnhsg
-2022-08-26 13:58:53,549 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:53,549 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:53,775 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:46061
-2022-08-26 13:58:53,775 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:53,776 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:53,787 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:46061
-2022-08-26 13:58:53,788 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:53,788 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:53,823 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:35485
-2022-08-26 13:58:53,823 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:39551
-2022-08-26 13:58:53,824 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-af0ac6c0-99a4-4fbe-abbf-708b72a05d75 Address tcp://192.168.1.159:35485 Status: Status.closing
-2022-08-26 13:58:53,824 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ac704e59-464d-4f3e-a4e7-e32439a85d2f Address tcp://192.168.1.159:39551 Status: Status.closing
-2022-08-26 13:58:53,952 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 13:58:53,977 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 13:58:54,390 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:33429
-2022-08-26 13:58:54,390 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:33429
-2022-08-26 13:58:54,390 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:54,390 - distributed.worker - INFO -          dashboard at:        192.168.1.159:35119
-2022-08-26 13:58:54,390 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:46061
-2022-08-26 13:58:54,390 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:54,390 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:54,391 - distributed.worker - INFO -                Memory:                   5.24 GiB
-2022-08-26 13:58:54,391 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-eu0mfyaa
-2022-08-26 13:58:54,391 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:54,429 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:45905
-2022-08-26 13:58:54,429 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:45905
-2022-08-26 13:58:54,429 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:58:54,429 - distributed.worker - INFO -          dashboard at:        192.168.1.159:35757
-2022-08-26 13:58:54,429 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:46061
-2022-08-26 13:58:54,429 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:54,429 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:58:54,429 - distributed.worker - INFO -                Memory:                   5.24 GiB
-2022-08-26 13:58:54,429 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kltpgqkp
-2022-08-26 13:58:54,429 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:54,614 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:46061
-2022-08-26 13:58:54,614 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:54,615 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:54,652 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:46061
-2022-08-26 13:58:54,652 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:54,653 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:54,786 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:45905
-2022-08-26 13:58:54,786 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:33429
-2022-08-26 13:58:54,787 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8242d209-43b9-4408-b14b-0dc950b34676 Address tcp://192.168.1.159:45905 Status: Status.closing
-2022-08-26 13:58:54,787 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-15dbd702-c607-4331-a0d8-4abdcbbd0aa9 Address tcp://192.168.1.159:33429 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_broken_worker 2022-08-26 13:58:54,996 - distributed.worker - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1443, in close
-    await self.finished()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 447, in finished
-    await self._event_finished.wait()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/locks.py", line 214, in wait
-    await fut
-asyncio.exceptions.CancelledError
-PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_spec_close_clusters PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_new_worker_spec PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_nanny_port 2022-08-26 13:58:55,535 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:46709
-2022-08-26 13:58:55,535 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:46709
-2022-08-26 13:58:55,535 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:58:55,535 - distributed.worker - INFO -          dashboard at:        192.168.1.159:45919
-2022-08-26 13:58:55,535 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:42915
-2022-08-26 13:58:55,535 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:55,535 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 13:58:55,535 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:58:55,535 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zzdxl2ld
-2022-08-26 13:58:55,535 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:55,768 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:42915
-2022-08-26 13:58:55,769 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:58:55,769 - distributed.core - INFO - Starting established connection
-2022-08-26 13:58:55,814 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:46709
-2022-08-26 13:58:55,815 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-291d63ad-0145-48e4-a9dd-bd5e72a52544 Address tcp://192.168.1.159:46709 Status: Status.closing
-PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_spec_process PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_logs PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_scheduler_info PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_dashboard_link PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_widget PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_scale_cores_memory PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_ProcessInterfaceValid PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_MultiWorker PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_run_spec PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_run_spec_cluster_worker_names PASSED
-distributed/deploy/tests/test_spec_cluster.py::test_bad_close PASSED
-distributed/diagnostics/tests/test_cluster_dump_plugin.py::test_cluster_dump_plugin 2022-08-26 13:58:57,927 - distributed.client - ERROR - 
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x56403d9096b0>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/diagnostics/tests/test_eventstream.py::test_eventstream 2022-08-26 13:58:58,131 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/diagnostics/tests/test_eventstream.py::test_eventstream_remote 2022-08-26 13:58:58,363 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
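Both eventstream tests above deliberately run div(1, 0), so the "Compute Failed" warning is expected. A minimal sketch (not from the diff) of how such a task failure surfaces on the client side:

    from operator import truediv
    from distributed import Client, LocalCluster

    with LocalCluster(n_workers=1, processes=False) as cluster, Client(cluster) as client:
        future = client.submit(truediv, 1, 0)  # same div-by-zero as in the log
        print(repr(future.exception()))        # ZeroDivisionError('division by zero')
        # future.result() would re-raise the ZeroDivisionError locally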
-distributed/diagnostics/tests/test_graph_layout.py::test_basic PASSED
-distributed/diagnostics/tests/test_graph_layout.py::test_construct_after_call PASSED
-distributed/diagnostics/tests/test_graph_layout.py::test_states PASSED
-distributed/diagnostics/tests/test_graph_layout.py::test_release_tasks PASSED
-distributed/diagnostics/tests/test_graph_layout.py::test_forget PASSED
-distributed/diagnostics/tests/test_graph_layout.py::test_unique_positions PASSED
-distributed/diagnostics/tests/test_memory_sampler.py::test_async PASSED
-distributed/diagnostics/tests/test_memory_sampler.py::test_sync 2022-08-26 13:59:01,235 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:01,237 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:01,240 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:01,240 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38735
-2022-08-26 13:59:01,240 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:01,247 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41037
-2022-08-26 13:59:01,247 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41037
-2022-08-26 13:59:01,247 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41333
-2022-08-26 13:59:01,247 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38735
-2022-08-26 13:59:01,247 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:01,247 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:01,247 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:01,247 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uu015my3
-2022-08-26 13:59:01,247 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:01,247 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46243
-2022-08-26 13:59:01,247 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46243
-2022-08-26 13:59:01,247 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34871
-2022-08-26 13:59:01,247 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38735
-2022-08-26 13:59:01,247 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:01,247 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:01,247 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:01,247 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qtgsav3e
-2022-08-26 13:59:01,247 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:01,492 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41037', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:01,713 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41037
-2022-08-26 13:59:01,713 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:01,713 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38735
-2022-08-26 13:59:01,713 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:01,714 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46243', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:01,714 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:01,714 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46243
-2022-08-26 13:59:01,715 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:01,715 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38735
-2022-08-26 13:59:01,715 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:01,716 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:01,721 - distributed.scheduler - INFO - Receive client connection: Client-e9852be5-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:01,721 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 13:59:02,249 - distributed.scheduler - INFO - Remove client Client-e9852be5-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:02,250 - distributed.scheduler - INFO - Remove client Client-e9852be5-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:02,250 - distributed.scheduler - INFO - Close client connection: Client-e9852be5-2581-11ed-a99d-00d861bc4509
-
-distributed/diagnostics/tests/test_memory_sampler.py::test_at_least_one_sample PASSED
-distributed/diagnostics/tests/test_memory_sampler.py::test_multi_sample SKIPPED
-distributed/diagnostics/tests/test_memory_sampler.py::test_pandas[False] PASSED
-distributed/diagnostics/tests/test_memory_sampler.py::test_pandas[True] PASSED
-distributed/diagnostics/tests/test_memory_sampler.py::test_pandas_multiseries[False] SKIPPED
-distributed/diagnostics/tests/test_memory_sampler.py::test_pandas_multiseries[True] SKIPPED
-distributed/diagnostics/tests/test_progress.py::test_many_Progress PASSED
-distributed/diagnostics/tests/test_progress.py::test_multiprogress PASSED
-distributed/diagnostics/tests/test_progress.py::test_robust_to_bad_plugin PASSED
-distributed/diagnostics/tests/test_progress.py::test_AllProgress 2022-08-26 13:59:09,835 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40653
-2022-08-26 13:59:09,835 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36607
-2022-08-26 13:59:09,835 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40653
-2022-08-26 13:59:09,835 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36607
-2022-08-26 13:59:09,835 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:59:09,835 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:59:09,835 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44689
-2022-08-26 13:59:09,835 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33563
-2022-08-26 13:59:09,835 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42059
-2022-08-26 13:59:09,835 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42059
-2022-08-26 13:59:09,835 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:09,835 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:09,835 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:09,835 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:59:09,835 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:09,835 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:09,835 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2yqtl5ko
-2022-08-26 13:59:09,835 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fyh6n6c5
-2022-08-26 13:59:09,835 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:09,835 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:10,063 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42059
-2022-08-26 13:59:10,064 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:10,064 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:10,074 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42059
-2022-08-26 13:59:10,074 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:10,075 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:10,389 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 13:59:11,419 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36607
-2022-08-26 13:59:11,419 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40653
-2022-08-26 13:59:11,420 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7e5bb8d5-b39d-4700-94fe-506bcb9016a3 Address tcp://127.0.0.1:36607 Status: Status.closing
-2022-08-26 13:59:11,420 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c64164ba-db53-4348-89e7-0ea0a7cff7cc Address tcp://127.0.0.1:40653 Status: Status.closing
-2022-08-26 13:59:11,625 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 13:59:11,628 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 13:59:12,081 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33715
-2022-08-26 13:59:12,081 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33715
-2022-08-26 13:59:12,081 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:59:12,081 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42839
-2022-08-26 13:59:12,081 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42059
-2022-08-26 13:59:12,081 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:12,081 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:12,081 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:12,081 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e5ze1q3d
-2022-08-26 13:59:12,081 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:12,084 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46391
-2022-08-26 13:59:12,084 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46391
-2022-08-26 13:59:12,084 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:59:12,084 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35571
-2022-08-26 13:59:12,084 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42059
-2022-08-26 13:59:12,085 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:12,085 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:59:12,085 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:12,085 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0tvccakd
-2022-08-26 13:59:12,085 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:12,311 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42059
-2022-08-26 13:59:12,312 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:12,312 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:12,325 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42059
-2022-08-26 13:59:12,326 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:12,326 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:12,636 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33715
-2022-08-26 13:59:12,636 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46391
-2022-08-26 13:59:12,637 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-64f1b23d-65f8-4eae-a8d4-36e4c7c094d6 Address tcp://127.0.0.1:33715 Status: Status.closing
-2022-08-26 13:59:12,637 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-84823893-b998-4f97-bcf0-cf9247940696 Address tcp://127.0.0.1:46391 Status: Status.closing
-PASSED
-distributed/diagnostics/tests/test_progress.py::test_AllProgress_lost_key 2022-08-26 13:59:13,461 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45963
-2022-08-26 13:59:13,461 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45963
-2022-08-26 13:59:13,462 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:59:13,462 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42249
-2022-08-26 13:59:13,462 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34165
-2022-08-26 13:59:13,462 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:13,462 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:13,462 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:13,462 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j_txxzd3
-2022-08-26 13:59:13,462 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:13,476 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45069
-2022-08-26 13:59:13,476 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45069
-2022-08-26 13:59:13,476 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:59:13,476 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35269
-2022-08-26 13:59:13,476 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34165
-2022-08-26 13:59:13,476 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:13,476 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:59:13,476 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:13,476 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ohfcyhm_
-2022-08-26 13:59:13,476 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:13,703 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34165
-2022-08-26 13:59:13,703 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:13,703 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:13,706 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34165
-2022-08-26 13:59:13,706 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:13,707 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:13,936 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45963
-2022-08-26 13:59:13,937 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-613294c1-df14-4f3e-87f1-23dd95462c72 Address tcp://127.0.0.1:45963 Status: Status.closing
-2022-08-26 13:59:14,103 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45069
-2022-08-26 13:59:14,103 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5b6a334f-6a7d-40db-b7ab-7dec8a0bc7d9 Address tcp://127.0.0.1:45069 Status: Status.closing
-PASSED
-distributed/diagnostics/tests/test_progress.py::test_group_timing 2022-08-26 13:59:14,909 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45581
-2022-08-26 13:59:14,909 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45581
-2022-08-26 13:59:14,909 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:59:14,909 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46733
-2022-08-26 13:59:14,909 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41505
-2022-08-26 13:59:14,909 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:14,909 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:14,909 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:14,909 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vpgtslb0
-2022-08-26 13:59:14,910 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:14,919 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38571
-2022-08-26 13:59:14,919 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38571
-2022-08-26 13:59:14,919 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:59:14,919 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36657
-2022-08-26 13:59:14,919 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41505
-2022-08-26 13:59:14,919 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:14,919 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:59:14,919 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:14,919 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6cjin8uj
-2022-08-26 13:59:14,920 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:15,147 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41505
-2022-08-26 13:59:15,147 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:15,148 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:15,158 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41505
-2022-08-26 13:59:15,158 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:15,158 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:17,486 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38571
-2022-08-26 13:59:17,487 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45581
-2022-08-26 13:59:17,488 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6d303e1f-642a-4297-90ae-ca16eb297c9e Address tcp://127.0.0.1:45581 Status: Status.closing
-2022-08-26 13:59:17,488 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dde005be-01f8-4e33-b385-8922a53121ec Address tcp://127.0.0.1:38571 Status: Status.closing
-2022-08-26 13:59:17,691 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 13:59:17,692 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 13:59:18,159 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40181
-2022-08-26 13:59:18,159 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40181
-2022-08-26 13:59:18,159 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:59:18,159 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46415
-2022-08-26 13:59:18,159 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41505
-2022-08-26 13:59:18,159 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:18,159 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:18,160 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:18,160 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z7vqrwnb
-2022-08-26 13:59:18,160 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:18,162 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42135
-2022-08-26 13:59:18,162 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42135
-2022-08-26 13:59:18,162 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:59:18,162 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35581
-2022-08-26 13:59:18,162 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41505
-2022-08-26 13:59:18,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:18,162 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:59:18,162 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:18,162 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xm_82tas
-2022-08-26 13:59:18,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:18,392 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41505
-2022-08-26 13:59:18,392 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:18,392 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:18,411 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41505
-2022-08-26 13:59:18,412 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:18,412 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:18,501 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40181
-2022-08-26 13:59:18,501 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42135
-2022-08-26 13:59:18,502 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-57368753-87b9-42f5-b7c4-8cea1bb24835 Address tcp://127.0.0.1:40181 Status: Status.closing
-2022-08-26 13:59:18,502 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-88575a2d-9287-44b1-9031-825bf6263bd4 Address tcp://127.0.0.1:42135 Status: Status.closing
-PASSED
-distributed/diagnostics/tests/test_progress_stream.py::test_progress_quads PASSED
-distributed/diagnostics/tests/test_progress_stream.py::test_progress_quads_too_many PASSED
-distributed/diagnostics/tests/test_progress_stream.py::test_progress_stream 2022-08-26 13:59:18,908 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/diagnostics/tests/test_progress_stream.py::test_progress_quads_many_functions PASSED
-distributed/diagnostics/tests/test_progressbar.py::test_text_progressbar 2022-08-26 13:59:19,780 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:19,782 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:19,785 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:19,785 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45921
-2022-08-26 13:59:19,785 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:19,798 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39349
-2022-08-26 13:59:19,798 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39349
-2022-08-26 13:59:19,798 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39813
-2022-08-26 13:59:19,798 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45921
-2022-08-26 13:59:19,798 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:19,798 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:19,798 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:19,798 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9s3wmswq
-2022-08-26 13:59:19,798 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:19,802 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43321
-2022-08-26 13:59:19,802 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43321
-2022-08-26 13:59:19,802 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38407
-2022-08-26 13:59:19,802 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45921
-2022-08-26 13:59:19,803 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:19,803 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:19,803 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:19,803 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ekkyhoko
-2022-08-26 13:59:19,803 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:20,038 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43321', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:20,269 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43321
-2022-08-26 13:59:20,269 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:20,269 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45921
-2022-08-26 13:59:20,269 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:20,270 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39349', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:20,270 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:20,270 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39349
-2022-08-26 13:59:20,270 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:20,270 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45921
-2022-08-26 13:59:20,271 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:20,271 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:20,276 - distributed.scheduler - INFO - Receive client connection: Client-f4948d68-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:20,276 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 13:59:20,348 - distributed.scheduler - INFO - Remove client Client-f4948d68-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:20,349 - distributed.scheduler - INFO - Remove client Client-f4948d68-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:20,349 - distributed.scheduler - INFO - Close client connection: Client-f4948d68-2581-11ed-a99d-00d861bc4509
-
-distributed/diagnostics/tests/test_progressbar.py::test_TextProgressBar_error 2022-08-26 13:59:20,403 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-[                                        ] | 0% Completed |  0.1s[                                        ] | 0% Completed |  0.0sPASSED
-distributed/diagnostics/tests/test_progressbar.py::test_TextProgressBar_empty 2022-08-26 13:59:20,668 - distributed.core - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6368, in feed
-    await asyncio.sleep(interval)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/diagnostics/tests/test_progressbar.py::test_progress_function 2022-08-26 13:59:21,547 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:21,549 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:21,552 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:21,552 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46421
-2022-08-26 13:59:21,552 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:21,560 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33435
-2022-08-26 13:59:21,560 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33435
-2022-08-26 13:59:21,560 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40373
-2022-08-26 13:59:21,560 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46421
-2022-08-26 13:59:21,560 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:21,560 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:21,560 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:21,560 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ighj5hja
-2022-08-26 13:59:21,560 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:21,561 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42179
-2022-08-26 13:59:21,561 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42179
-2022-08-26 13:59:21,562 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38141
-2022-08-26 13:59:21,562 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46421
-2022-08-26 13:59:21,562 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:21,562 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:21,562 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:21,562 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gpfinx5d
-2022-08-26 13:59:21,562 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:21,814 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42179', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:22,041 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42179
-2022-08-26 13:59:22,041 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:22,041 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46421
-2022-08-26 13:59:22,041 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:22,042 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33435', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:22,042 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:22,042 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33435
-2022-08-26 13:59:22,042 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:22,043 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46421
-2022-08-26 13:59:22,043 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:22,044 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:22,049 - distributed.scheduler - INFO - Receive client connection: Client-f5a2f74a-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:22,049 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 13:59:22,119 - distributed.scheduler - INFO - Remove client Client-f5a2f74a-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:22,119 - distributed.scheduler - INFO - Remove client Client-f5a2f74a-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:22,119 - distributed.scheduler - INFO - Close client connection: Client-f5a2f74a-2581-11ed-a99d-00d861bc4509
-
-distributed/diagnostics/tests/test_progressbar.py::test_progress_function_w_kwargs 2022-08-26 13:59:22,780 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:22,782 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:22,785 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:22,785 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35257
-2022-08-26 13:59:22,785 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:22,788 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-gpfinx5d', purging
-2022-08-26 13:59:22,788 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ighj5hja', purging
-2022-08-26 13:59:22,793 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45659
-2022-08-26 13:59:22,793 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45659
-2022-08-26 13:59:22,793 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40011
-2022-08-26 13:59:22,793 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35257
-2022-08-26 13:59:22,793 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:22,793 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:22,793 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:22,793 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nk5pgeh0
-2022-08-26 13:59:22,793 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:22,794 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43501
-2022-08-26 13:59:22,795 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43501
-2022-08-26 13:59:22,795 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46341
-2022-08-26 13:59:22,795 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35257
-2022-08-26 13:59:22,795 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:22,795 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:22,795 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:22,795 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ujy5937y
-2022-08-26 13:59:22,795 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:23,024 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45659', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:23,252 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45659
-2022-08-26 13:59:23,252 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:23,252 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35257
-2022-08-26 13:59:23,252 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:23,253 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43501', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:23,253 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:23,253 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43501
-2022-08-26 13:59:23,253 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:23,254 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35257
-2022-08-26 13:59:23,254 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:23,255 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:23,260 - distributed.scheduler - INFO - Receive client connection: Client-f65bd410-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:23,260 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 13:59:23,328 - distributed.scheduler - INFO - Remove client Client-f65bd410-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:23,329 - distributed.scheduler - INFO - Remove client Client-f65bd410-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:23,329 - distributed.scheduler - INFO - Close client connection: Client-f65bd410-2581-11ed-a99d-00d861bc4509
-
-distributed/diagnostics/tests/test_progressbar.py::test_deprecated_loop_properties PASSED
-distributed/diagnostics/tests/test_scheduler_plugin.py::test_simple PASSED
-distributed/diagnostics/tests/test_scheduler_plugin.py::test_add_remove_worker PASSED
-distributed/diagnostics/tests/test_scheduler_plugin.py::test_async_add_remove_worker PASSED
-distributed/diagnostics/tests/test_scheduler_plugin.py::test_lifecycle PASSED
-distributed/diagnostics/tests/test_scheduler_plugin.py::test_register_scheduler_plugin 2022-08-26 13:59:24,209 - distributed.core - ERROR - Exception while handling op register_scheduler_plugin
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4973, in register_scheduler_plugin
-    result = plugin.start(self)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/diagnostics/tests/test_scheduler_plugin.py", line 161, in start
-    raise RuntimeError("raising in start method")
-RuntimeError: raising in start method
-PASSED
-distributed/diagnostics/tests/test_scheduler_plugin.py::test_register_scheduler_plugin_pickle_disabled 2022-08-26 13:59:24,422 - distributed.core - ERROR - Exception while handling op register_scheduler_plugin
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4957, in register_scheduler_plugin
-    raise ValueError(
-ValueError: Cannot register a scheduler plugin as the scheduler has been explicitly disallowed from deserializing arbitrary bytestrings using pickle via the 'distributed.scheduler.pickle' configuration setting.
-PASSED
-distributed/diagnostics/tests/test_scheduler_plugin.py::test_log_event_plugin PASSED
-distributed/diagnostics/tests/test_scheduler_plugin.py::test_register_plugin_on_scheduler PASSED
-distributed/diagnostics/tests/test_scheduler_plugin.py::test_closing_errors_ok 2022-08-26 13:59:25,067 - distributed.scheduler - ERROR - Plugin call failed during scheduler.close
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 3447, in log_errors
-    await func()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/diagnostics/tests/test_scheduler_plugin.py", line 229, in before_close
-    raise Exception("BEFORE_CLOSE")
-Exception: BEFORE_CLOSE
-2022-08-26 13:59:25,067 - distributed.scheduler - ERROR - Plugin call failed during scheduler.close
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 3447, in log_errors
-    await func()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/diagnostics/tests/test_scheduler_plugin.py", line 232, in close
-    raise Exception("AFTER_CLOSE")
-Exception: AFTER_CLOSE
-2022-08-26 13:59:25,172 - distributed.client - ERROR - 
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x56403dbc72e0>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/diagnostics/tests/test_task_stream.py::test_TaskStreamPlugin 2022-08-26 13:59:25,390 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/diagnostics/tests/test_task_stream.py::test_maxlen PASSED
-distributed/diagnostics/tests/test_task_stream.py::test_collect PASSED
-distributed/diagnostics/tests/test_task_stream.py::test_no_startstops PASSED
-distributed/diagnostics/tests/test_task_stream.py::test_client PASSED
-distributed/diagnostics/tests/test_task_stream.py::test_client_sync 2022-08-26 13:59:27,956 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:27,958 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:27,961 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:27,961 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44767
-2022-08-26 13:59:27,961 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:27,968 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40431
-2022-08-26 13:59:27,968 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40431
-2022-08-26 13:59:27,968 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42173
-2022-08-26 13:59:27,968 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44767
-2022-08-26 13:59:27,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:27,968 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42969
-2022-08-26 13:59:27,968 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:27,968 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42969
-2022-08-26 13:59:27,969 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:27,969 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wcm0eikl
-2022-08-26 13:59:27,969 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40377
-2022-08-26 13:59:27,969 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44767
-2022-08-26 13:59:27,969 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:27,969 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:27,969 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:27,969 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:27,969 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d6jhb0kt
-2022-08-26 13:59:27,970 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:28,195 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42969', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:28,413 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42969
-2022-08-26 13:59:28,413 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:28,413 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44767
-2022-08-26 13:59:28,414 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:28,414 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40431', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:28,415 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40431
-2022-08-26 13:59:28,415 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:28,415 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:28,415 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44767
-2022-08-26 13:59:28,415 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:28,416 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:28,421 - distributed.scheduler - INFO - Receive client connection: Client-f96f5bcb-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:28,422 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 13:59:28,548 - distributed.scheduler - INFO - Remove client Client-f96f5bcb-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:28,548 - distributed.scheduler - INFO - Remove client Client-f96f5bcb-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:28,548 - distributed.scheduler - INFO - Close client connection: Client-f96f5bcb-2581-11ed-a99d-00d861bc4509
-
-distributed/diagnostics/tests/test_task_stream.py::test_get_task_stream_plot PASSED
-distributed/diagnostics/tests/test_task_stream.py::test_get_task_stream_save 2022-08-26 13:59:29,867 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:29,869 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:29,872 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:29,872 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44165
-2022-08-26 13:59:29,872 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:29,879 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36571
-2022-08-26 13:59:29,879 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36571
-2022-08-26 13:59:29,879 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37653
-2022-08-26 13:59:29,879 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44165
-2022-08-26 13:59:29,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:29,879 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:29,879 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:29,879 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-65c410ru
-2022-08-26 13:59:29,879 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33181
-2022-08-26 13:59:29,880 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33181
-2022-08-26 13:59:29,880 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32899
-2022-08-26 13:59:29,880 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44165
-2022-08-26 13:59:29,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:29,880 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:29,880 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:29,880 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mohn_gs6
-2022-08-26 13:59:29,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:29,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:30,110 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36571', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:30,325 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36571
-2022-08-26 13:59:30,325 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:30,325 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44165
-2022-08-26 13:59:30,325 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:30,325 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33181', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:30,326 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33181
-2022-08-26 13:59:30,326 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:30,326 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:30,326 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44165
-2022-08-26 13:59:30,326 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:30,327 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:30,332 - distributed.scheduler - INFO - Receive client connection: Client-fa92ea13-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:30,332 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 13:59:30,466 - distributed.scheduler - INFO - Remove client Client-fa92ea13-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:30,466 - distributed.scheduler - INFO - Remove client Client-fa92ea13-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:30,466 - distributed.scheduler - INFO - Close client connection: Client-fa92ea13-2581-11ed-a99d-00d861bc4509
-
-distributed/diagnostics/tests/test_widgets.py::test_progressbar_widget 2022-08-26 13:59:30,553 - distributed.core - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6368, in feed
-    await asyncio.sleep(interval)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-2022-08-26 13:59:30,553 - distributed.core - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6368, in feed
-    await asyncio.sleep(interval)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/diagnostics/tests/test_widgets.py::test_multi_progressbar_widget 2022-08-26 13:59:30,793 - distributed.worker - WARNING - Compute Failed
-Key:       throws-544204284805c10f2a9422dc61f16006
-Function:  throws
-args:      (2)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-2022-08-26 13:59:30,818 - distributed.core - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6368, in feed
-    await asyncio.sleep(interval)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/diagnostics/tests/test_widgets.py::test_multi_progressbar_widget_after_close 2022-08-26 13:59:31,043 - distributed.worker - WARNING - Compute Failed
-Key:       e
-Function:  throws
-args:      (2)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-2022-08-26 13:59:31,121 - distributed.core - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6368, in feed
-    await asyncio.sleep(interval)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/diagnostics/tests/test_widgets.py::test_values 2022-08-26 13:59:31,933 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:31,936 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:31,938 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:31,938 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34869
-2022-08-26 13:59:31,938 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:31,946 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43213
-2022-08-26 13:59:31,946 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43213
-2022-08-26 13:59:31,946 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34221
-2022-08-26 13:59:31,946 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34869
-2022-08-26 13:59:31,946 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:31,946 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:31,946 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:31,946 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tbgcvjvh
-2022-08-26 13:59:31,946 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:31,946 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34871
-2022-08-26 13:59:31,946 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34871
-2022-08-26 13:59:31,946 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37389
-2022-08-26 13:59:31,946 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34869
-2022-08-26 13:59:31,946 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:31,946 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:31,946 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:31,946 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-55n7xvi6
-2022-08-26 13:59:31,946 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:32,192 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34871', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:32,417 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34871
-2022-08-26 13:59:32,417 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:32,417 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34869
-2022-08-26 13:59:32,417 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:32,418 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43213', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:32,418 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:32,418 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43213
-2022-08-26 13:59:32,418 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:32,418 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34869
-2022-08-26 13:59:32,419 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:32,419 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:32,424 - distributed.scheduler - INFO - Receive client connection: Client-fbd226d2-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:32,424 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:32,538 - distributed.worker - WARNING - Compute Failed
-Key:       throws-e7547614a2ac592d36b4a0b751337778
-Function:  throws
-args:      (1)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED2022-08-26 13:59:32,565 - distributed.scheduler - INFO - Remove client Client-fbd226d2-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:32,565 - distributed.scheduler - INFO - Remove client Client-fbd226d2-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:32,565 - distributed.scheduler - INFO - Close client connection: Client-fbd226d2-2581-11ed-a99d-00d861bc4509
-
-distributed/diagnostics/tests/test_widgets.py::test_progressbar_done 2022-08-26 13:59:33,218 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:33,221 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:33,223 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:33,223 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42009
-2022-08-26 13:59:33,223 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:33,226 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-tbgcvjvh', purging
-2022-08-26 13:59:33,226 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-55n7xvi6', purging
-2022-08-26 13:59:33,231 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46697
-2022-08-26 13:59:33,231 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46697
-2022-08-26 13:59:33,231 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43421
-2022-08-26 13:59:33,231 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42009
-2022-08-26 13:59:33,231 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:33,231 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:33,231 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:33,231 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gus3ah2q
-2022-08-26 13:59:33,231 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:33,231 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39359
-2022-08-26 13:59:33,231 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39359
-2022-08-26 13:59:33,232 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33057
-2022-08-26 13:59:33,232 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42009
-2022-08-26 13:59:33,232 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:33,232 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:33,232 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:33,232 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ajb9i74g
-2022-08-26 13:59:33,232 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:33,477 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39359', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:33,696 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39359
-2022-08-26 13:59:33,696 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:33,696 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42009
-2022-08-26 13:59:33,697 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:33,697 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46697', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:33,697 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:33,697 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46697
-2022-08-26 13:59:33,697 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:33,698 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42009
-2022-08-26 13:59:33,698 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:33,699 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:33,703 - distributed.scheduler - INFO - Receive client connection: Client-fc956955-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:33,704 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:33,808 - distributed.worker - WARNING - Compute Failed
-Key:       throws-4d3ec008fdbd12fad20862f28f312832
-Function:  throws
-args:      ([1, 2, 3, 4, 5])
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED2022-08-26 13:59:33,826 - distributed.scheduler - INFO - Remove client Client-fc956955-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:33,826 - distributed.scheduler - INFO - Remove client Client-fc956955-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:33,826 - distributed.scheduler - INFO - Close client connection: Client-fc956955-2581-11ed-a99d-00d861bc4509
-
-distributed/diagnostics/tests/test_widgets.py::test_progressbar_cancel 2022-08-26 13:59:34,481 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:34,484 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:34,487 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:34,487 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34587
-2022-08-26 13:59:34,487 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:34,489 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ajb9i74g', purging
-2022-08-26 13:59:34,490 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-gus3ah2q', purging
-2022-08-26 13:59:34,495 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35993
-2022-08-26 13:59:34,495 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35993
-2022-08-26 13:59:34,496 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43953
-2022-08-26 13:59:34,496 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34587
-2022-08-26 13:59:34,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:34,496 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:34,496 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44825
-2022-08-26 13:59:34,496 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44825
-2022-08-26 13:59:34,496 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:34,496 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41345
-2022-08-26 13:59:34,496 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z38w5bro
-2022-08-26 13:59:34,496 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34587
-2022-08-26 13:59:34,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:34,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:34,496 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:34,496 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:34,496 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jprx_q83
-2022-08-26 13:59:34,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:34,725 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44825', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:34,939 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44825
-2022-08-26 13:59:34,939 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:34,939 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34587
-2022-08-26 13:59:34,940 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:34,940 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35993', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:34,940 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35993
-2022-08-26 13:59:34,940 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:34,940 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:34,941 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34587
-2022-08-26 13:59:34,941 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:34,941 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:34,947 - distributed.scheduler - INFO - Receive client connection: Client-fd530a76-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:34,947 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:34,962 - distributed.worker - WARNING - Compute Failed
-Key:       lambda-7d68bc85e02b3219f6931f008c335339
-Function:  lambda
-args:      (1)
-kwargs:    {}
-Exception: "TypeError('test_progressbar_cancel.<locals>.<listcomp>.<lambda>() takes 0 positional arguments but 1 was given')"
-
-2022-08-26 13:59:34,962 - distributed.worker - WARNING - Compute Failed
-Key:       lambda-f32b8d55dc16c8de9e82c816e89b6f29
-Function:  lambda
-args:      (0)
-kwargs:    {}
-Exception: "TypeError('test_progressbar_cancel.<locals>.<listcomp>.<lambda>() takes 0 positional arguments but 1 was given')"
-
-2022-08-26 13:59:34,963 - distributed.worker - WARNING - Compute Failed
-Key:       lambda-eab82a9635e190f72879939438605cfa
-Function:  lambda
-args:      (3)
-kwargs:    {}
-Exception: "TypeError('test_progressbar_cancel.<locals>.<listcomp>.<lambda>() takes 0 positional arguments but 1 was given')"
-
-2022-08-26 13:59:34,963 - distributed.worker - WARNING - Compute Failed
-Key:       lambda-8876180dc941b66f125d911ac0eed055
-Function:  lambda
-args:      (4)
-kwargs:    {}
-Exception: "TypeError('test_progressbar_cancel.<locals>.<listcomp>.<lambda>() takes 0 positional arguments but 1 was given')"
-
-2022-08-26 13:59:34,964 - distributed.worker - WARNING - Compute Failed
-Key:       lambda-04996cf2608f614a5188fee29328aaac
-Function:  lambda
-args:      (2)
-kwargs:    {}
-Exception: "TypeError('test_progressbar_cancel.<locals>.<listcomp>.<lambda>() takes 0 positional arguments but 1 was given')"
-
-2022-08-26 13:59:35,009 - distributed.scheduler - INFO - Client Client-fd530a76-2581-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 13:59:35,009 - distributed.scheduler - INFO - Scheduler cancels key lambda-8876180dc941b66f125d911ac0eed055.  Force=False
-PASSED2022-08-26 13:59:35,021 - distributed.scheduler - INFO - Remove client Client-fd530a76-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:35,022 - distributed.scheduler - INFO - Remove client Client-fd530a76-2581-11ed-a99d-00d861bc4509
-
-distributed/diagnostics/tests/test_widgets.py::test_multibar_complete 2022-08-26 13:59:35,091 - distributed.worker - WARNING - Compute Failed
-Key:       e
-Function:  throws
-args:      (2)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-2022-08-26 13:59:35,166 - distributed.core - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6368, in feed
-    await asyncio.sleep(interval)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/diagnostics/tests/test_widgets.py::test_fast 2022-08-26 13:59:35,984 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:35,987 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:35,989 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:35,989 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40467
-2022-08-26 13:59:35,990 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:35,997 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34085
-2022-08-26 13:59:35,997 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34085
-2022-08-26 13:59:35,997 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42529
-2022-08-26 13:59:35,997 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40467
-2022-08-26 13:59:35,997 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:35,997 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:35,997 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:35,997 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1o2_hziy
-2022-08-26 13:59:35,997 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:35,997 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35943
-2022-08-26 13:59:35,997 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35943
-2022-08-26 13:59:35,997 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45709
-2022-08-26 13:59:35,997 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40467
-2022-08-26 13:59:35,997 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:35,997 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:35,997 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:35,998 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e30cdr_y
-2022-08-26 13:59:35,998 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:36,225 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35943', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:36,441 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35943
-2022-08-26 13:59:36,441 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:36,441 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40467
-2022-08-26 13:59:36,441 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:36,442 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34085', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:36,442 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34085
-2022-08-26 13:59:36,442 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:36,443 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:36,442 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40467
-2022-08-26 13:59:36,443 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:36,443 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:36,449 - distributed.scheduler - INFO - Receive client connection: Client-fe3831ce-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:36,449 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 13:59:36,611 - distributed.scheduler - INFO - Remove client Client-fe3831ce-2581-11ed-a99d-00d861bc4509
-2022-08-26 13:59:36,612 - distributed.scheduler - INFO - Remove client Client-fe3831ce-2581-11ed-a99d-00d861bc4509
-
-distributed/diagnostics/tests/test_widgets.py::test_serializers 2022-08-26 13:59:36,695 - distributed.core - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6368, in feed
-    await asyncio.sleep(interval)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/diagnostics/tests/test_widgets.py::test_tls 2022-08-26 13:59:36,951 - distributed.core - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6368, in feed
-    await asyncio.sleep(interval)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_create_with_client PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_remove_with_client PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_remove_with_client_raises 2022-08-26 13:59:37,512 - distributed.core - ERROR - Exception while handling op unregister_worker_plugin
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6648, in unregister_worker_plugin
-    self.worker_plugins.pop(name)
-KeyError: 'bar'
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6650, in unregister_worker_plugin
-    raise ValueError(f"The worker plugin {name} does not exists")
-ValueError: The worker plugin bar does not exists
-PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_create_on_construction PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_normal_task_transitions_called PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_failing_task_transitions_called 2022-08-26 13:59:38,194 - distributed.worker - WARNING - Compute Failed
-Key:       task
-Function:  failing
-args:      (1)
-kwargs:    {}
-Exception: 'Exception()'
-
-PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_superseding_task_transitions_called PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_dependent_tasks PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_empty_plugin PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_default_name PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_assert_no_warning_no_overload PASSED
-distributed/diagnostics/tests/test_worker_plugin.py::test_WorkerPlugin_overwrite PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_connect PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_worker_404 PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_prefix PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_prometheus PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_prometheus_collect_task_states PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_health PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_sitemap PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_task_page PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_allow_websocket_origin 2022-08-26 13:59:42,221 - bokeh.server.views.ws - ERROR - Refusing websocket connection from Origin 'http://evil.invalid';                       use --allow-websocket-origin=evil.invalid or set BOKEH_ALLOW_WS_ORIGIN=evil.invalid to permit this; currently we allow origins {'good.invalid:80'}
-PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_eventstream PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_api_disabled_by_default PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_api PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_retire_workers PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_get_workers PASSED
-distributed/http/scheduler/tests/test_scheduler_http.py::test_adaptive_target PASSED
-distributed/http/scheduler/tests/test_semaphore_http.py::test_prometheus_collect_task_states PASSED
-distributed/http/tests/test_core.py::test_scheduler PASSED
-distributed/http/tests/test_routing.py::test_basic PASSED
-distributed/http/worker/tests/test_worker_http.py::test_prometheus PASSED
-distributed/http/worker/tests/test_worker_http.py::test_health PASSED
-distributed/http/worker/tests/test_worker_http.py::test_sitemap PASSED
-distributed/protocol/tests/test_arrow.py::test_roundtrip[RecordBatch] PASSED
-distributed/protocol/tests/test_arrow.py::test_roundtrip[Table] PASSED
-distributed/protocol/tests/test_arrow.py::test_scatter[RecordBatch] PASSED
-distributed/protocol/tests/test_arrow.py::test_scatter[Table] PASSED
-distributed/protocol/tests/test_arrow.py::test_dumps_compression PASSED
-distributed/protocol/tests/test_collection.py::test_serialize_collection[y0-dask-tuple] PASSED
-distributed/protocol/tests/test_collection.py::test_serialize_collection[y0-dask-dict] PASSED
-distributed/protocol/tests/test_collection.py::test_serialize_collection[y0-dask-list] PASSED
-distributed/protocol/tests/test_collection.py::test_serialize_collection[y1-pickle-tuple] PASSED
-distributed/protocol/tests/test_collection.py::test_serialize_collection[y1-pickle-dict] PASSED
-distributed/protocol/tests/test_collection.py::test_serialize_collection[y1-pickle-list] PASSED
-distributed/protocol/tests/test_collection.py::test_serialize_collection[None-pickle-tuple] PASSED
-distributed/protocol/tests/test_collection.py::test_serialize_collection[None-pickle-dict] PASSED
-distributed/protocol/tests/test_collection.py::test_serialize_collection[None-pickle-list] PASSED
-distributed/protocol/tests/test_collection.py::test_large_collections_serialize_simply PASSED
-distributed/protocol/tests/test_collection.py::test_nested_types PASSED
-distributed/protocol/tests/test_collection_cuda.py::test_serialize_cupy[50-cuda-tuple] SKIPPED
-distributed/protocol/tests/test_collection_cuda.py::test_serialize_cupy[50-cuda-dict] SKIPPED
-distributed/protocol/tests/test_collection_cuda.py::test_serialize_cupy[None-pickle-tuple] SKIPPED
-distributed/protocol/tests/test_collection_cuda.py::test_serialize_cupy[None-pickle-dict] SKIPPED
-distributed/protocol/tests/test_collection_cuda.py::test_serialize_pandas_pandas[df20-cuda-tuple] SKIPPED
-distributed/protocol/tests/test_collection_cuda.py::test_serialize_pandas_pandas[df20-cuda-dict] SKIPPED
-distributed/protocol/tests/test_collection_cuda.py::test_serialize_pandas_pandas[None-pickle-tuple] SKIPPED
-distributed/protocol/tests/test_collection_cuda.py::test_serialize_pandas_pandas[None-pickle-dict] SKIPPED
-distributed/protocol/tests/test_h5py.py::test_serialize_deserialize_file PASSED
-distributed/protocol/tests/test_h5py.py::test_serialize_deserialize_group PASSED
-distributed/protocol/tests/test_h5py.py::test_serialize_deserialize_dataset PASSED
-distributed/protocol/tests/test_h5py.py::test_raise_error_on_serialize_write_permissions PASSED
-distributed/protocol/tests/test_h5py.py::test_h5py_serialize PASSED
-distributed/protocol/tests/test_h5py.py::test_h5py_serialize_2 PASSED
-distributed/protocol/tests/test_highlevelgraph.py::test_combo_of_layer_types PASSED
-distributed/protocol/tests/test_highlevelgraph.py::test_blockwise PASSED
-distributed/protocol/tests/test_highlevelgraph.py::test_shuffle PASSED
-distributed/protocol/tests/test_highlevelgraph.py::test_array_annotations PASSED
-distributed/protocol/tests/test_highlevelgraph.py::test_dataframe_annotations PASSED
-distributed/protocol/tests/test_numpy.py::test_serialize PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x0] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x1] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x2] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x3] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x4] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x5] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x6] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x7] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x8] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x9] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x10] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x11] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x12] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x13] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x14] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x15] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x16] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x17] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x18] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x19] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x20] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x21] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x22] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x23] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x24] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x25] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x26] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x27] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x28] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x29] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x30] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x31] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x32] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x33] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x34] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy[x35] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_numpy_writable[True] PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_numpy_writable[False] PASSED
-distributed/protocol/tests/test_numpy.py::test_serialize_numpy_ma_masked_array[x0] PASSED
-distributed/protocol/tests/test_numpy.py::test_serialize_numpy_ma_masked_array[x1] PASSED
-distributed/protocol/tests/test_numpy.py::test_serialize_numpy_ma_masked_array[x2] PASSED
-distributed/protocol/tests/test_numpy.py::test_serialize_numpy_ma_masked_array[x3] PASSED
-distributed/protocol/tests/test_numpy.py::test_serialize_numpy_ma_masked_array[x4] PASSED
-distributed/protocol/tests/test_numpy.py::test_serialize_numpy_ma_masked_array[x5] PASSED
-distributed/protocol/tests/test_numpy.py::test_serialize_numpy_ma_masked PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy_custom_dtype SKIPPED
-distributed/protocol/tests/test_numpy.py::test_memmap PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_serialize_numpy_large SKIPPED
-distributed/protocol/tests/test_numpy.py::test_itemsize[f8-8] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[i4-4] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[c16-16] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[b-1] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[S3-3] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[M8[us]-8] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[M8[s]-8] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[U3-12] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[dt8-12] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[dt9-4] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[dt10-8] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[dt11-88] PASSED
-distributed/protocol/tests/test_numpy.py::test_itemsize[dt12-8] PASSED
-distributed/protocol/tests/test_numpy.py::test_compress_numpy PASSED
-distributed/protocol/tests/test_numpy.py::test_compress_memoryview PASSED
-distributed/protocol/tests/test_numpy.py::test_dumps_large PASSED
-distributed/protocol/tests/test_numpy.py::test_zero_strided_numpy_array[True-x0] PASSED
-distributed/protocol/tests/test_numpy.py::test_zero_strided_numpy_array[True-x1] PASSED
-distributed/protocol/tests/test_numpy.py::test_zero_strided_numpy_array[True-x2] PASSED
-distributed/protocol/tests/test_numpy.py::test_zero_strided_numpy_array[True-x3] PASSED
-distributed/protocol/tests/test_numpy.py::test_zero_strided_numpy_array[False-x0] PASSED
-distributed/protocol/tests/test_numpy.py::test_zero_strided_numpy_array[False-x1] PASSED
-distributed/protocol/tests/test_numpy.py::test_zero_strided_numpy_array[False-x2] PASSED
-distributed/protocol/tests/test_numpy.py::test_zero_strided_numpy_array[False-x3] PASSED
-distributed/protocol/tests/test_numpy.py::test_non_zero_strided_array PASSED
-distributed/protocol/tests/test_numpy.py::test_serialize_writeable_array_readonly_base_object PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df0] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df1] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df2] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df3] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df4] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df5] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df6] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df7] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df8] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df9] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df10] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df11] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df12] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df13] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df14] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df15] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df16] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df17] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df18] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df19] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df20] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df21] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df22] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df23] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df24] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_serialize_pandas[df25] PASSED
-distributed/protocol/tests/test_pandas.py::test_dumps_pandas_writable PASSED
-distributed/protocol/tests/test_pickle.py::test_pickle_data[4] PASSED
-distributed/protocol/tests/test_pickle.py::test_pickle_data[5] PASSED
-distributed/protocol/tests/test_pickle.py::test_pickle_out_of_band[4] PASSED
-distributed/protocol/tests/test_pickle.py::test_pickle_out_of_band[5] PASSED
-distributed/protocol/tests/test_pickle.py::test_pickle_empty[4] PASSED
-distributed/protocol/tests/test_pickle.py::test_pickle_empty[5] PASSED
-distributed/protocol/tests/test_pickle.py::test_pickle_numpy[4] PASSED
-distributed/protocol/tests/test_pickle.py::test_pickle_numpy[5] PASSED
-distributed/protocol/tests/test_pickle.py::test_pickle_functions[4] PASSED
-distributed/protocol/tests/test_pickle.py::test_pickle_functions[5] PASSED
-distributed/protocol/tests/test_pickle.py::test_pickle_by_value_when_registered PASSED
-distributed/protocol/tests/test_protocol.py::test_protocol PASSED
-distributed/protocol/tests/test_protocol.py::test_compression_config[auto-lz4] PASSED
-distributed/protocol/tests/test_protocol.py::test_compression_config[None-None] PASSED
-distributed/protocol/tests/test_protocol.py::test_compression_config[zlib-zlib] PASSED
-distributed/protocol/tests/test_protocol.py::test_compression_config[foo-ValueError] PASSED
-distributed/protocol/tests/test_protocol.py::test_compression_1 PASSED
-distributed/protocol/tests/test_protocol.py::test_compression_2 PASSED
-distributed/protocol/tests/test_protocol.py::test_compression_3 PASSED
-distributed/protocol/tests/test_protocol.py::test_compression_without_deserialization PASSED
-distributed/protocol/tests/test_protocol.py::test_small PASSED
-distributed/protocol/tests/test_protocol.py::test_small_and_big PASSED
-distributed/protocol/tests/test_protocol.py::test_maybe_compress[None-None] PASSED
-distributed/protocol/tests/test_protocol.py::test_maybe_compress[zlib-zlib] PASSED
-distributed/protocol/tests/test_protocol.py::test_maybe_compress[lz4-lz4] PASSED
-distributed/protocol/tests/test_protocol.py::test_maybe_compress[zstandard-zstd] PASSED
-distributed/protocol/tests/test_protocol.py::test_maybe_compress_config_default[None-None] PASSED
-distributed/protocol/tests/test_protocol.py::test_maybe_compress_config_default[zlib-zlib] PASSED
-distributed/protocol/tests/test_protocol.py::test_maybe_compress_config_default[lz4-lz4] PASSED
-distributed/protocol/tests/test_protocol.py::test_maybe_compress_config_default[zstandard-zstd] PASSED
-distributed/protocol/tests/test_protocol.py::test_maybe_compress_sample PASSED
-distributed/protocol/tests/test_protocol.py::test_large_bytes PASSED
-distributed/protocol/tests/test_protocol.py::test_large_messages SKIPPED
-distributed/protocol/tests/test_protocol.py::test_large_messages_map PASSED
-distributed/protocol/tests/test_protocol.py::test_loads_deserialize_False PASSED
-distributed/protocol/tests/test_protocol.py::test_loads_without_deserialization_avoids_compression PASSED
-distributed/protocol/tests/test_protocol.py::test_dumps_loads_Serialize PASSED
-distributed/protocol/tests/test_protocol.py::test_dumps_loads_Serialized PASSED
-distributed/protocol/tests/test_protocol.py::test_maybe_compress_memoryviews PASSED
-distributed/protocol/tests/test_protocol.py::test_preserve_header[serializers0] PASSED
-distributed/protocol/tests/test_protocol.py::test_preserve_header[serializers1] PASSED
-distributed/protocol/tests/test_protocol_utils.py::test_pack_frames PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_empty PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_one PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_parts[slices0] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_parts[slices1] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_parts[slices2] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_parts[slices3] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_parts[slices4] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_parts[slices5] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_parts[slices6] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_parts[slices7] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_parts[slices8] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_parts[slices9] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_readonly_buffer PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_catch_non_memoryview PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_catch_gaps[slices0] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_catch_gaps[slices1] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_catch_gaps[slices2] PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_catch_different_buffer PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_catch_different_non_contiguous PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_catch_multidimensional PASSED
-distributed/protocol/tests/test_protocol_utils.py::TestMergeMemroyviews::test_catch_different_formats PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype0-bsr_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype0-coo_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype0-csc_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype0-csr_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype0-dia_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype0-dok_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype0-lil_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype1-bsr_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype1-coo_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype1-csc_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype1-csr_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype1-dia_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype1-dok_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype1-lil_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype2-bsr_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype2-coo_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype2-csc_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype2-csr_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype2-dia_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype2-dok_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype2-lil_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype3-bsr_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype3-coo_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype3-csc_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype3-csr_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype3-dia_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype3-dok_matrix] PASSED
-distributed/protocol/tests/test_scipy.py::test_serialize_scipy_sparse[dtype3-lil_matrix] PASSED
-distributed/protocol/tests/test_serialize.py::test_dumps_serialize PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_bytestrings PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_empty_array PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[b] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[B] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[h] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[H] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[i] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[I] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[l] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[L] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[q] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[Q] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[f] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_arrays[d] PASSED
-distributed/protocol/tests/test_serialize.py::test_Serialize PASSED
-distributed/protocol/tests/test_serialize.py::test_Serialized PASSED
-distributed/protocol/tests/test_serialize.py::test_nested_deserialize PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_iterate_collection PASSED
-distributed/protocol/tests/test_serialize.py::test_object_in_graph PASSED
-distributed/protocol/tests/test_serialize.py::test_scatter PASSED
-distributed/protocol/tests/test_serialize.py::test_inter_worker_comms PASSED
-distributed/protocol/tests/test_serialize.py::test_empty PASSED
-distributed/protocol/tests/test_serialize.py::test_empty_loads PASSED
-distributed/protocol/tests/test_serialize.py::test_empty_loads_deep PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_bytes[kwargs0] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_bytes[kwargs1] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_list_compress PASSED
-distributed/protocol/tests/test_serialize.py::test_malicious_exception PASSED
-distributed/protocol/tests/test_serialize.py::test_errors PASSED
-distributed/protocol/tests/test_serialize.py::test_err_on_bad_deserializer 2022-08-26 13:59:52,720 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 421, in deserialize
-    raise TypeError(
-TypeError: Data serialized with pickle but only able to deserialize data with ['msgpack']
-PASSED
-distributed/protocol/tests/test_serialize.py::test_context_specific_serialization PASSED
-distributed/protocol/tests/test_serialize.py::test_context_specific_serialization_class PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_raises PASSED
-distributed/protocol/tests/test_serialize.py::test_profile_nested_sizeof PASSED
-distributed/protocol/tests/test_serialize.py::test_different_compression_families PASSED
-distributed/protocol/tests/test_serialize.py::test_frame_split [b'\x81\xa1x\x81\xae__Serialized__\x01', b'\x88\xaasub-header\x80\xa4type\xa5bytes\xaftype-serialized\xc4!\x80\x04\x95\x16\x00\x00\x00\x00\x00\x00\x00\x8c\x08builtins\x94\x8c\x05bytes\x94\x93\x94.\xaaserializer\xa4dask\xb4split-num-sub-frames\x91\x03\xadsplit-offsets\x91\x00\xabcompression\x93\xa3lz4\xa3lz4\xa3lz4\xaenum-sub-frames\x03', b'\x00\x000\x00\x8f1234abcd\x08\x00\xff\xff ... (long run of repeated \xff bytes in the lz4-compressed sub-frames elided) ...']
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x10P4abcd', b'\x00\x000\x00\x8f1234abcd\x08\x00\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 ff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
 f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
 \xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
 xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x
 [~60 wrapped lines of repeated \xff bytes elided]\x10P4abcd', b'\x00\x00 \x00\x8f1234abcd\x08\x00[repeated \xff bytes elided]\x00P4abcd']
-PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data0-False] PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data1-False] PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data2-False] PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data3-False] PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data4-False] PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data5-True] PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data6-True] PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data7-True] XFAIL
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data8-True] PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data9-True] PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data10-True] PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data11-True] PASSED
-distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data12-True] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_lists[serializers0] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_lists[serializers1] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_lists[serializers2] PASSED
-distributed/protocol/tests/test_serialize.py::test_serialize_lists[serializers3] PASSED
-distributed/protocol/tests/test_serialize.py::test_deser_memoryview[data_in0] PASSED
-distributed/protocol/tests/test_serialize.py::test_deser_memoryview[data_in1] PASSED
-distributed/protocol/tests/test_serialize.py::test_ser_memoryview_object PASSED
-distributed/protocol/tests/test_serialize.py::test_ser_empty_1d_memoryview PASSED
-distributed/protocol/tests/test_serialize.py::test_ser_empty_nd_memoryview PASSED
-distributed/protocol/tests/test_serialize.py::test_large_pickled_object 2022-08-26 13:59:53,741 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44789
-2022-08-26 13:59:53,741 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44789
-2022-08-26 13:59:53,741 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 13:59:53,741 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41641
-2022-08-26 13:59:53,741 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36729
-2022-08-26 13:59:53,741 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:53,741 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:53,741 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:53,741 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tcbzhged
-2022-08-26 13:59:53,741 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:53,743 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42821
-2022-08-26 13:59:53,743 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42821
-2022-08-26 13:59:53,743 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 13:59:53,743 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37907
-2022-08-26 13:59:53,743 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36729
-2022-08-26 13:59:53,743 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:53,743 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 13:59:53,743 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:53,743 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a7fmisvx
-2022-08-26 13:59:53,743 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:53,962 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36729
-2022-08-26 13:59:53,962 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:53,962 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:53,971 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36729
-2022-08-26 13:59:53,971 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:53,971 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:54,612 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44789
-2022-08-26 13:59:54,612 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42821
-2022-08-26 13:59:54,613 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-aa719ac0-11d8-48e2-9d1a-e275b24be234 Address tcp://127.0.0.1:42821 Status: Status.closing
-2022-08-26 13:59:54,613 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-da6b212f-9b43-4d84-8239-2f953eb071ad Address tcp://127.0.0.1:44789 Status: Status.closing
-PASSED
-distributed/protocol/tests/test_to_pickle.py::test_ToPickle PASSED
-distributed/protocol/tests/test_to_pickle.py::test_non_msgpack_serializable_layer PASSED
-distributed/shuffle/tests/test_graph.py::test_basic 2022-08-26 13:59:55,889 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:55,891 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:55,893 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:55,894 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43981
-2022-08-26 13:59:55,894 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:55,900 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35387
-2022-08-26 13:59:55,900 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35387
-2022-08-26 13:59:55,900 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45059
-2022-08-26 13:59:55,900 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43981
-2022-08-26 13:59:55,900 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:55,900 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:55,900 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:55,900 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bjal1yy3
-2022-08-26 13:59:55,901 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:55,910 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35995
-2022-08-26 13:59:55,910 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35995
-2022-08-26 13:59:55,910 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43663
-2022-08-26 13:59:55,910 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43981
-2022-08-26 13:59:55,910 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:55,910 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:55,911 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:55,911 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6am6jq_k
-2022-08-26 13:59:55,911 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:56,126 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35995', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:56,337 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35995
-2022-08-26 13:59:56,337 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:56,337 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43981
-2022-08-26 13:59:56,337 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:56,337 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35387', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:56,338 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35387
-2022-08-26 13:59:56,338 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:56,338 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:56,338 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43981
-2022-08-26 13:59:56,339 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:56,339 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:56,344 - distributed.scheduler - INFO - Receive client connection: Client-0a1417f5-2582-11ed-a99d-00d861bc4509
-2022-08-26 13:59:56,344 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 13:59:57,057 - distributed.scheduler - INFO - Remove client Client-0a1417f5-2582-11ed-a99d-00d861bc4509
-2022-08-26 13:59:57,057 - distributed.scheduler - INFO - Remove client Client-0a1417f5-2582-11ed-a99d-00d861bc4509
-2022-08-26 13:59:57,057 - distributed.scheduler - INFO - Close client connection: Client-0a1417f5-2582-11ed-a99d-00d861bc4509
-
-distributed/shuffle/tests/test_graph.py::test_basic_state PASSED
-distributed/shuffle/tests/test_graph.py::test_multiple_linear 2022-08-26 13:59:58,448 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 13:59:58,450 - distributed.scheduler - INFO - State start
-2022-08-26 13:59:58,453 - distributed.scheduler - INFO - Clear task state
-2022-08-26 13:59:58,453 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42649
-2022-08-26 13:59:58,453 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 13:59:58,460 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40235
-2022-08-26 13:59:58,460 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40235
-2022-08-26 13:59:58,460 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40469
-2022-08-26 13:59:58,460 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42649
-2022-08-26 13:59:58,460 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:58,460 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:58,460 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:58,460 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qraruybd
-2022-08-26 13:59:58,460 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:58,463 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38825
-2022-08-26 13:59:58,463 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38825
-2022-08-26 13:59:58,463 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46311
-2022-08-26 13:59:58,463 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42649
-2022-08-26 13:59:58,463 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:58,463 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 13:59:58,464 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 13:59:58,464 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ixcti2yp
-2022-08-26 13:59:58,464 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:58,679 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38825', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:58,884 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38825
-2022-08-26 13:59:58,884 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:58,884 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42649
-2022-08-26 13:59:58,884 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:58,885 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40235', status: init, memory: 0, processing: 0>
-2022-08-26 13:59:58,885 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:58,885 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40235
-2022-08-26 13:59:58,885 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:58,885 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42649
-2022-08-26 13:59:58,886 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 13:59:58,886 - distributed.core - INFO - Starting established connection
-2022-08-26 13:59:58,891 - distributed.scheduler - INFO - Receive client connection: Client-0b98af5e-2582-11ed-a99d-00d861bc4509
-2022-08-26 13:59:58,891 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:00,000 - distributed.scheduler - INFO - Remove client Client-0b98af5e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:00,001 - distributed.scheduler - INFO - Remove client Client-0b98af5e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:00,001 - distributed.scheduler - INFO - Close client connection: Client-0b98af5e-2582-11ed-a99d-00d861bc4509
-
-distributed/shuffle/tests/test_multi_comm.py::test_basic PASSED
-distributed/shuffle/tests/test_multi_comm.py::test_exceptions PASSED
-distributed/shuffle/tests/test_multi_file.py::test_basic PASSED
-distributed/shuffle/tests/test_multi_file.py::test_many[2] PASSED
-distributed/shuffle/tests/test_multi_file.py::test_many[100] PASSED
-distributed/shuffle/tests/test_multi_file.py::test_many[1000] PASSED
-distributed/shuffle/tests/test_multi_file.py::test_exceptions PASSED
-distributed/shuffle/tests/test_shuffle.py::test_basic PASSED
-distributed/shuffle/tests/test_shuffle.py::test_concurrent PASSED
-distributed/shuffle/tests/test_shuffle.py::test_bad_disk 2022-08-26 14:00:02,713 - distributed.worker - WARNING - Compute Failed
-Key:       ('shuffle-unpack-dd01a849e9c5fcd6d305f5cbafdd4e14', 2)
-Function:  shuffle_unpack
-args:      ('dd01a849e9c5fcd6d305f5cbafdd4e14', 2, None)
-kwargs:    {}
-Exception: "FileNotFoundError(2, 'No such file or directory')"
-
-2022-08-26 14:00:02,713 - distributed.worker - WARNING - Compute Failed
-Key:       ('shuffle-unpack-dd01a849e9c5fcd6d305f5cbafdd4e14', 5)
-Function:  shuffle_unpack
-args:      ('dd01a849e9c5fcd6d305f5cbafdd4e14', 5, None)
-kwargs:    {}
-Exception: "FileNotFoundError(2, 'No such file or directory')"
-
-2022-08-26 14:00:02,717 - distributed.worker - WARNING - Compute Failed
-Key:       ('shuffle-unpack-dd01a849e9c5fcd6d305f5cbafdd4e14', 6)
-Function:  shuffle_unpack
-args:      ('dd01a849e9c5fcd6d305f5cbafdd4e14', 6, None)
-kwargs:    {}
-Exception: "FileNotFoundError(2, 'No such file or directory')"
-
-2022-08-26 14:00:02,726 - distributed.worker - WARNING - Compute Failed
-Key:       ('shuffle-unpack-dd01a849e9c5fcd6d305f5cbafdd4e14', 7)
-Function:  shuffle_unpack
-args:      ('dd01a849e9c5fcd6d305f5cbafdd4e14', 7, None)
-kwargs:    {}
-Exception: "FileNotFoundError(2, 'No such file or directory')"
-
-2022-08-26 14:00:02,732 - distributed.worker - WARNING - Compute Failed
-Key:       ('shuffle-unpack-dd01a849e9c5fcd6d305f5cbafdd4e14', 0)
-Function:  shuffle_unpack
-args:      ('dd01a849e9c5fcd6d305f5cbafdd4e14', 0, None)
-kwargs:    {}
-Exception: "FileNotFoundError(2, 'No such file or directory')"
-
-2022-08-26 14:00:02,733 - distributed.worker - WARNING - Compute Failed
-Key:       ('shuffle-unpack-dd01a849e9c5fcd6d305f5cbafdd4e14', 8)
-Function:  shuffle_unpack
-args:      ('dd01a849e9c5fcd6d305f5cbafdd4e14', 8, None)
-kwargs:    {}
-Exception: "FileNotFoundError(2, 'No such file or directory')"
-
-2022-08-26 14:00:02,735 - distributed.diskutils - ERROR - Failed to remove '/tmp/dask-worker-space/worker-guxq1hnq' (failed in <built-in function lstat>): [Errno 2] No such file or directory: '/tmp/dask-worker-space/worker-guxq1hnq'
-2022-08-26 14:00:02,735 - distributed.diskutils - ERROR - Failed to remove '/tmp/dask-worker-space/worker-rpdexf1g' (failed in <built-in function lstat>): [Errno 2] No such file or directory: '/tmp/dask-worker-space/worker-rpdexf1g'
-PASSED
-distributed/shuffle/tests/test_shuffle.py::test_crashed_worker SKIPPED
-distributed/shuffle/tests/test_shuffle.py::test_heartbeat PASSED
-distributed/shuffle/tests/test_shuffle.py::test_processing_chain PASSED
-distributed/shuffle/tests/test_shuffle.py::test_head PASSED
-distributed/shuffle/tests/test_shuffle.py::test_split_by_worker PASSED
-distributed/shuffle/tests/test_shuffle.py::test_tail PASSED
-distributed/shuffle/tests/test_shuffle.py::test_repeat PASSED
-distributed/shuffle/tests/test_shuffle.py::test_new_worker PASSED
-distributed/shuffle/tests/test_shuffle.py::test_multi PASSED
-distributed/shuffle/tests/test_shuffle.py::test_restrictions PASSED
-distributed/shuffle/tests/test_shuffle.py::test_delete_some_results XPASS
-distributed/shuffle/tests/test_shuffle.py::test_add_some_results 2022-08-26 14:00:10,405 - distributed.worker - WARNING - Compute Failed
-Key:       ('shuffle-unpack-304467d696d993f50504717fec15edd6', 4)
-Function:  shuffle_unpack
-args:      ('304467d696d993f50504717fec15edd6', 4, None)
-kwargs:    {}
-Exception: "AssertionError('`get_output_partition` called before barrier task')"
-
-2022-08-26 14:00:10,409 - distributed.worker - WARNING - Compute Failed
-Key:       ('shuffle-unpack-304467d696d993f50504717fec15edd6', 5)
-Function:  shuffle_unpack
-args:      ('304467d696d993f50504717fec15edd6', 5, None)
-kwargs:    {}
-Exception: "AssertionError('`get_output_partition` called before barrier task')"
-
-2022-08-26 14:00:10,411 - distributed.worker - WARNING - Compute Failed
-Key:       ('shuffle-unpack-304467d696d993f50504717fec15edd6', 6)
-Function:  shuffle_unpack
-args:      ('304467d696d993f50504717fec15edd6', 6, None)
-kwargs:    {}
-Exception: "AssertionError('`get_output_partition` called before barrier task')"
-
-2022-08-26 14:00:10,416 - distributed.worker - WARNING - Compute Failed
-Key:       ('shuffle-unpack-304467d696d993f50504717fec15edd6', 7)
-Function:  shuffle_unpack
-args:      ('304467d696d993f50504717fec15edd6', 7, None)
-kwargs:    {}
-Exception: "AssertionError('`get_output_partition` called before barrier task')"
-
-2022-08-26 14:00:10,416 - distributed.worker - WARNING - Compute Failed
-Key:       ('shuffle-unpack-304467d696d993f50504717fec15edd6', 8)
-Function:  shuffle_unpack
-args:      ('304467d696d993f50504717fec15edd6', 8, None)
-kwargs:    {}
-Exception: "AssertionError('`get_output_partition` called before barrier task')"
-
-2022-08-26 14:00:10,502 - distributed.shuffle.multi_file - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/runners.py", line 44, in run
-    return loop.run_until_complete(main)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
-    return future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 373, in inner_fn
-    return await async_fn(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 1069, in async_fn_outer
-    return await asyncio.wait_for(async_fn(), timeout=timeout * 2)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 971, in async_fn
-    result = await coro2
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/tests/test_shuffle.py", line 441, in test_add_some_results
-    await c.compute(x.size)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 289, in _result
-    raise exc.with_traceback(tb)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle.py", line 48, in shuffle_unpack
-    return get_ext().get_output_partition(id, output_partition)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 323, in get_output_partition
-    output = shuffle.get_output_partition(output_partition)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 188, in get_output_partition
-    assert self.transferred, "`get_output_partition` called before barrier task"
-AssertionError: `get_output_partition` called before barrier task
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/multi_file.py", line 158, in communicate
-    await asyncio.sleep(0.1)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-XFAIL
-distributed/shuffle/tests/test_shuffle.py::test_clean_after_close 2022-08-26 14:00:10,892 - distributed.shuffle.multi_comm - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1302, in _connect
-    async def _connect(self, addr, timeout=None):
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/multi_comm.py", line 182, in process
-    await self.send(address, [b"".join(shards)])
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 80, in send
-    return await self.worker.rpc(address).shuffle_receive(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1151, in send_recv_from_rpc
-    comm = await self.pool.connect(self.addr)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1368, in connect
-    await connect_attempt
-asyncio.exceptions.CancelledError
-2022-08-26 14:00:10,894 - distributed.shuffle.multi_comm - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1302, in _connect
-    async def _connect(self, addr, timeout=None):
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/multi_comm.py", line 182, in process
-    await self.send(address, [b"".join(shards)])
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 80, in send
-    return await self.worker.rpc(address).shuffle_receive(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1151, in send_recv_from_rpc
-    comm = await self.pool.connect(self.addr)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1368, in connect
-    await connect_attempt
-asyncio.exceptions.CancelledError
-2022-08-26 14:00:10,896 - distributed.shuffle.multi_comm - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1302, in _connect
-    async def _connect(self, addr, timeout=None):
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/multi_comm.py", line 182, in process
-    await self.send(address, [b"".join(shards)])
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 80, in send
-    return await self.worker.rpc(address).shuffle_receive(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1151, in send_recv_from_rpc
-    comm = await self.pool.connect(self.addr)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1368, in connect
-    await connect_attempt
-asyncio.exceptions.CancelledError
-2022-08-26 14:00:10,897 - distributed.shuffle.multi_comm - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1302, in _connect
-    async def _connect(self, addr, timeout=None):
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/multi_comm.py", line 182, in process
-    await self.send(address, [b"".join(shards)])
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/shuffle/shuffle_extension.py", line 80, in send
-    return await self.worker.rpc(address).shuffle_receive(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1151, in send_recv_from_rpc
-    comm = await self.pool.connect(self.addr)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1368, in connect
-    await connect_attempt
-asyncio.exceptions.CancelledError
-PASSED
-distributed/shuffle/tests/test_shuffle_extension.py::test_installation PASSED
-distributed/shuffle/tests/test_shuffle_extension.py::test_split_by_worker SKIPPED
-distributed/shuffle/tests/test_shuffle_extension.py::test_split_by_worker_many_workers SKIPPED
-distributed/shuffle/tests/test_shuffle_extension.py::test_split_by_partition PASSED
-distributed/tests/test_active_memory_manager.py::test_no_policies PASSED
-distributed/tests/test_active_memory_manager.py::test_drop PASSED
-distributed/tests/test_active_memory_manager.py::test_start_stop PASSED
-distributed/tests/test_active_memory_manager.py::test_auto_start PASSED
-distributed/tests/test_active_memory_manager.py::test_add_policy PASSED
-distributed/tests/test_active_memory_manager.py::test_multi_start PASSED
-distributed/tests/test_active_memory_manager.py::test_not_registered PASSED
-distributed/tests/test_active_memory_manager.py::test_client_proxy_sync 2022-08-26 14:00:14,996 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:14,999 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:15,001 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:15,002 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44623
-2022-08-26 14:00:15,002 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:15,009 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44369
-2022-08-26 14:00:15,009 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44369
-2022-08-26 14:00:15,009 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42439
-2022-08-26 14:00:15,009 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44623
-2022-08-26 14:00:15,009 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:15,009 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:15,009 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:15,009 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37581
-2022-08-26 14:00:15,009 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ryr118lh
-2022-08-26 14:00:15,009 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37581
-2022-08-26 14:00:15,009 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:15,009 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36915
-2022-08-26 14:00:15,009 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44623
-2022-08-26 14:00:15,009 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:15,009 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:15,009 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:15,009 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ge774vwo
-2022-08-26 14:00:15,009 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:15,229 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44369', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:15,446 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44369
-2022-08-26 14:00:15,446 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:15,446 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44623
-2022-08-26 14:00:15,447 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:15,447 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37581', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:15,448 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:15,448 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37581
-2022-08-26 14:00:15,448 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:15,448 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44623
-2022-08-26 14:00:15,448 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:15,449 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:15,454 - distributed.scheduler - INFO - Receive client connection: Client-1577f974-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:15,454 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:15,466 - distributed.scheduler - INFO - Remove client Client-1577f974-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:15,466 - distributed.scheduler - INFO - Remove client Client-1577f974-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:15,466 - distributed.scheduler - INFO - Close client connection: Client-1577f974-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_active_memory_manager.py::test_client_proxy_async PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_not_in_memory PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_with_waiter PASSED
-distributed/tests/test_active_memory_manager.py::test_double_drop PASSED
-distributed/tests/test_active_memory_manager.py::test_double_drop_stress PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_from_worker_with_least_free_memory SKIPPED
-distributed/tests/test_active_memory_manager.py::test_drop_with_candidates PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_with_empty_candidates PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_from_candidates_without_key PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_with_bad_candidates PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_prefers_paused_workers PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_with_paused_workers_with_running_tasks_1 PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_with_paused_workers_with_running_tasks_2 PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_with_paused_workers_with_running_tasks_3_4[True] PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_with_paused_workers_with_running_tasks_3_4[False] PASSED
-distributed/tests/test_active_memory_manager.py::test_drop_with_paused_workers_with_running_tasks_5 PASSED
-distributed/tests/test_active_memory_manager.py::test_replicate PASSED
-distributed/tests/test_active_memory_manager.py::test_replicate_not_in_memory PASSED
-distributed/tests/test_active_memory_manager.py::test_double_replicate_stress PASSED
-distributed/tests/test_active_memory_manager.py::test_replicate_to_worker_with_most_free_memory SKIPPED
-distributed/tests/test_active_memory_manager.py::test_replicate_with_candidates PASSED
-distributed/tests/test_active_memory_manager.py::test_replicate_with_empty_candidates PASSED
-distributed/tests/test_active_memory_manager.py::test_replicate_to_candidates_with_key PASSED
-distributed/tests/test_active_memory_manager.py::test_replicate_avoids_paused_workers_1 PASSED
-distributed/tests/test_active_memory_manager.py::test_replicate_avoids_paused_workers_2 PASSED
-distributed/tests/test_active_memory_manager.py::test_ReduceReplicas PASSED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_amm_on_off[False] PASSED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_amm_on_off[True] PASSED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_no_remove PASSED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_with_ReduceReplicas[False] SKIPPED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_with_ReduceReplicas[True] SKIPPED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_all_replicas_are_being_retired PASSED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_no_recipients 2022-08-26 14:00:25,725 - distributed.active_memory_manager - WARNING - Tried retiring worker tcp://127.0.0.1:40057, but 1 tasks could not be moved as there are no suitable workers to receive them. The worker will not be retired.
-2022-08-26 14:00:25,725 - distributed.active_memory_manager - WARNING - Tried retiring worker tcp://127.0.0.1:46863, but 1 tasks could not be moved as there are no suitable workers to receive them. The worker will not be retired.
-PASSED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_all_recipients_are_paused 2022-08-26 14:00:25,981 - distributed.active_memory_manager - WARNING - Tried retiring worker tcp://127.0.0.1:35707, but 1 tasks could not be moved as there are no suitable workers to receive them. The worker will not be retired.
-PASSED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_new_keys_arrive_after_all_keys_moved_away PASSED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_faulty_recipient SKIPPED
-distributed/tests/test_active_memory_manager.py::test_drop_stress SKIPPED
-distributed/tests/test_active_memory_manager.py::test_ReduceReplicas_stress SKIPPED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_stress[False] SKIPPED
-distributed/tests/test_active_memory_manager.py::test_RetireWorker_stress[True] SKIPPED
-distributed/tests/test_actor.py::test_client_actions[True] PASSED
-distributed/tests/test_actor.py::test_client_actions[False] PASSED
-distributed/tests/test_actor.py::test_worker_actions[False] PASSED
-distributed/tests/test_actor.py::test_worker_actions[True] PASSED
-distributed/tests/test_actor.py::test_Actor PASSED
-distributed/tests/test_actor.py::test_linear_access XFAIL (Tornado c...)
-distributed/tests/test_actor.py::test_exceptions_create 2022-08-26 14:00:28,902 - distributed.worker - WARNING - Compute Failed
-Key:       Foo-e69bb1b3-c5b9-455e-a967-b34941446084
-Function:  Foo
-args:      ()
-kwargs:    {}
-Exception: "ValueError('bar')"
-
-PASSED
-distributed/tests/test_actor.py::test_exceptions_method PASSED
-distributed/tests/test_actor.py::test_gc PASSED
-distributed/tests/test_actor.py::test_track_dependencies PASSED
-distributed/tests/test_actor.py::test_future PASSED
-distributed/tests/test_actor.py::test_future_dependencies PASSED
-distributed/tests/test_actor.py::test_sync 2022-08-26 14:00:31,590 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:31,593 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:31,596 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:31,596 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41827
-2022-08-26 14:00:31,596 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:31,630 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42059
-2022-08-26 14:00:31,630 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42059
-2022-08-26 14:00:31,630 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36101
-2022-08-26 14:00:31,630 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41827
-2022-08-26 14:00:31,631 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:31,631 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:31,631 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:31,631 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-u_by08z9
-2022-08-26 14:00:31,631 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36187
-2022-08-26 14:00:31,631 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:31,631 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36187
-2022-08-26 14:00:31,631 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44309
-2022-08-26 14:00:31,631 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41827
-2022-08-26 14:00:31,631 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:31,631 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:31,631 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:31,631 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3h4zrq9s
-2022-08-26 14:00:31,631 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:31,870 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36187', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:32,051 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36187
-2022-08-26 14:00:32,051 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:32,051 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41827
-2022-08-26 14:00:32,051 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:32,052 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42059', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:32,052 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:32,052 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42059
-2022-08-26 14:00:32,053 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:32,053 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41827
-2022-08-26 14:00:32,053 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:32,054 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:32,059 - distributed.scheduler - INFO - Receive client connection: Client-1f5dacaa-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:32,059 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:32,096 - distributed.scheduler - INFO - Remove client Client-1f5dacaa-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:32,097 - distributed.scheduler - INFO - Remove client Client-1f5dacaa-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:32,097 - distributed.scheduler - INFO - Close client connection: Client-1f5dacaa-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_actor.py::test_timeout 2022-08-26 14:00:32,799 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:32,802 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:32,804 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:32,804 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43191
-2022-08-26 14:00:32,804 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:32,834 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-u_by08z9', purging
-2022-08-26 14:00:32,834 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-3h4zrq9s', purging
-2022-08-26 14:00:32,840 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37691
-2022-08-26 14:00:32,840 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37691
-2022-08-26 14:00:32,840 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45967
-2022-08-26 14:00:32,840 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43191
-2022-08-26 14:00:32,840 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:32,840 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:32,840 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:32,840 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-988jigfp
-2022-08-26 14:00:32,840 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:32,840 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43941
-2022-08-26 14:00:32,840 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43941
-2022-08-26 14:00:32,840 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36407
-2022-08-26 14:00:32,840 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43191
-2022-08-26 14:00:32,840 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:32,840 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:32,840 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:32,840 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-83hf74xb
-2022-08-26 14:00:32,840 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:33,081 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43941', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:33,266 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43941
-2022-08-26 14:00:33,266 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:33,266 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43191
-2022-08-26 14:00:33,266 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:33,267 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37691', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:33,267 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37691
-2022-08-26 14:00:33,267 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:33,267 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:33,267 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43191
-2022-08-26 14:00:33,268 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:33,268 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:33,273 - distributed.scheduler - INFO - Receive client connection: Client-2017007c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:33,273 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:33,306 - distributed.scheduler - INFO - Remove client Client-2017007c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:33,307 - distributed.scheduler - INFO - Remove client Client-2017007c-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_actor.py::test_failed_worker PASSED
-distributed/tests/test_actor.py::test_numpy_roundtrip PASSED
-distributed/tests/test_actor.py::test_numpy_roundtrip_getattr PASSED
-distributed/tests/test_actor.py::test_repr PASSED
-distributed/tests/test_actor.py::test_dir PASSED
-distributed/tests/test_actor.py::test_many_computations PASSED
-distributed/tests/test_actor.py::test_thread_safety PASSED
-distributed/tests/test_actor.py::test_Actors_create_dependencies PASSED
-distributed/tests/test_actor.py::test_load_balance PASSED
-distributed/tests/test_actor.py::test_load_balance_map PASSED
-distributed/tests/test_actor.py::test_compute SKIPPED (need --runslo...)
-distributed/tests/test_actor.py::test_compute_sync 2022-08-26 14:00:36,610 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:36,613 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:36,615 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:36,615 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37441
-2022-08-26 14:00:36,615 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:36,651 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36807
-2022-08-26 14:00:36,651 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36807
-2022-08-26 14:00:36,651 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45597
-2022-08-26 14:00:36,651 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37441
-2022-08-26 14:00:36,651 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:36,651 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:36,651 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:36,651 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lykdapsa
-2022-08-26 14:00:36,651 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:36,655 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40005
-2022-08-26 14:00:36,656 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40005
-2022-08-26 14:00:36,656 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42221
-2022-08-26 14:00:36,656 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37441
-2022-08-26 14:00:36,656 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:36,656 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:36,656 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:36,656 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mjadqn3h
-2022-08-26 14:00:36,656 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:36,844 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40005', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:37,031 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40005
-2022-08-26 14:00:37,031 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:37,031 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37441
-2022-08-26 14:00:37,031 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:37,032 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36807', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:37,032 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36807
-2022-08-26 14:00:37,032 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:37,033 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:37,032 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37441
-2022-08-26 14:00:37,033 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:37,033 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:37,039 - distributed.scheduler - INFO - Receive client connection: Client-22559423-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:37,039 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:37,070 - distributed.scheduler - INFO - Receive client connection: Client-worker-225a20b4-2582-11ed-909d-00d861bc4509
-2022-08-26 14:00:37,071 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:37,084 - distributed.worker - INFO - Run out-of-band function 'check'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.actors` attribute has been moved to `Worker.state.actors`
-  warnings.warn(
-2022-08-26 14:00:37,085 - distributed.worker - INFO - Run out-of-band function 'check'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.actors` attribute has been moved to `Worker.state.actors`
-  warnings.warn(
-2022-08-26 14:00:37,099 - distributed.worker - INFO - Run out-of-band function 'check'
-2022-08-26 14:00:37,099 - distributed.worker - INFO - Run out-of-band function 'check'
-PASSED2022-08-26 14:00:37,100 - distributed.scheduler - INFO - Remove client Client-22559423-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:37,101 - distributed.scheduler - INFO - Remove client Client-22559423-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:37,101 - distributed.scheduler - INFO - Close client connection: Client-22559423-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_actor.py::test_actors_in_profile PASSED
-distributed/tests/test_actor.py::test_waiter PASSED
-distributed/tests/test_actor.py::test_worker_actor_handle_is_weakref 2022-08-26 14:00:38,142 - distributed.client - ERROR - 
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x564040546210>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/tests/test_actor.py::test_worker_actor_handle_is_weakref_sync 2022-08-26 14:00:39,041 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:39,043 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:39,046 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:39,046 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39139
-2022-08-26 14:00:39,046 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:39,081 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33737
-2022-08-26 14:00:39,081 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33737
-2022-08-26 14:00:39,081 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33955
-2022-08-26 14:00:39,081 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39139
-2022-08-26 14:00:39,081 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:39,081 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:39,082 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:39,082 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-918br_bx
-2022-08-26 14:00:39,082 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:39,086 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39775
-2022-08-26 14:00:39,086 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39775
-2022-08-26 14:00:39,086 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35089
-2022-08-26 14:00:39,086 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39139
-2022-08-26 14:00:39,086 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:39,086 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:39,086 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:39,086 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8qekfqg1
-2022-08-26 14:00:39,086 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:39,276 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39775', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:39,466 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39775
-2022-08-26 14:00:39,466 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:39,466 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39139
-2022-08-26 14:00:39,466 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:39,466 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33737', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:39,467 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33737
-2022-08-26 14:00:39,467 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:39,467 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:39,467 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39139
-2022-08-26 14:00:39,468 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:39,468 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:39,473 - distributed.scheduler - INFO - Receive client connection: Client-23c907f2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:39,473 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:39,477 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:00:39,477 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:00:39,501 - distributed.scheduler - INFO - Receive client connection: Client-worker-23ccfe8b-2582-11ed-9164-00d861bc4509
-2022-08-26 14:00:39,501 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:39,504 - distributed.worker - INFO - Run out-of-band function 'check'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.actors` attribute has been moved to `Worker.state.actors`
-  warnings.warn(
-2022-08-26 14:00:39,506 - distributed.worker - INFO - Run out-of-band function 'check'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.actors` attribute has been moved to `Worker.state.actors`
-  warnings.warn(
-2022-08-26 14:00:39,519 - distributed.worker - INFO - Run out-of-band function 'check'
-2022-08-26 14:00:39,519 - distributed.worker - INFO - Run out-of-band function 'check'
-PASSED2022-08-26 14:00:39,521 - distributed.scheduler - INFO - Remove client Client-23c907f2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:39,521 - distributed.scheduler - INFO - Remove client Client-23c907f2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:39,521 - distributed.scheduler - INFO - Close client connection: Client-23c907f2-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_actor.py::test_worker_actor_handle_is_weakref_from_compute_sync 2022-08-26 14:00:40,241 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:40,244 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:40,247 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:40,247 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43255
-2022-08-26 14:00:40,247 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:40,250 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-8qekfqg1', purging
-2022-08-26 14:00:40,250 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-918br_bx', purging
-2022-08-26 14:00:40,290 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43351
-2022-08-26 14:00:40,290 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43351
-2022-08-26 14:00:40,290 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34449
-2022-08-26 14:00:40,290 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43255
-2022-08-26 14:00:40,290 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40349
-2022-08-26 14:00:40,290 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:40,290 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40349
-2022-08-26 14:00:40,290 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40721
-2022-08-26 14:00:40,290 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:40,290 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43255
-2022-08-26 14:00:40,290 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:40,290 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:40,290 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-grw14sze
-2022-08-26 14:00:40,290 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:40,290 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:40,290 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:40,290 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-icph0pqr
-2022-08-26 14:00:40,290 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:40,480 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40349', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:40,661 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40349
-2022-08-26 14:00:40,662 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:40,662 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43255
-2022-08-26 14:00:40,662 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:40,662 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43351', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:40,663 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43351
-2022-08-26 14:00:40,663 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:40,663 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:40,663 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43255
-2022-08-26 14:00:40,663 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:40,664 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:40,669 - distributed.scheduler - INFO - Receive client connection: Client-247f92db-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:40,669 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:40,673 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:00:40,673 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:00:40,697 - distributed.scheduler - INFO - Receive client connection: Client-worker-248380d6-2582-11ed-917c-00d861bc4509
-2022-08-26 14:00:40,697 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:40,705 - distributed.worker - INFO - Run out-of-band function 'worker_tasks_running'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.actors` attribute has been moved to `Worker.state.actors`
-  warnings.warn(
-2022-08-26 14:00:40,707 - distributed.worker - INFO - Run out-of-band function 'worker_tasks_running'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.actors` attribute has been moved to `Worker.state.actors`
-  warnings.warn(
-2022-08-26 14:00:40,720 - distributed.worker - INFO - Run out-of-band function 'worker_tasks_running'
-2022-08-26 14:00:40,720 - distributed.worker - INFO - Run out-of-band function 'worker_tasks_running'
-PASSED2022-08-26 14:00:40,722 - distributed.scheduler - INFO - Remove client Client-247f92db-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:40,722 - distributed.scheduler - INFO - Remove client Client-247f92db-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:40,722 - distributed.scheduler - INFO - Close client connection: Client-247f92db-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_actor.py::test_one_thread_deadlock 2022-08-26 14:00:41,434 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:41,436 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:41,439 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:41,439 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34037
-2022-08-26 14:00:41,439 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:41,443 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-icph0pqr', purging
-2022-08-26 14:00:41,443 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-grw14sze', purging
-2022-08-26 14:00:41,480 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40971
-2022-08-26 14:00:41,480 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40971
-2022-08-26 14:00:41,480 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32793
-2022-08-26 14:00:41,480 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34037
-2022-08-26 14:00:41,480 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:41,480 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:41,480 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:41,480 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mqs41g2d
-2022-08-26 14:00:41,480 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:41,667 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40971', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:41,849 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40971
-2022-08-26 14:00:41,850 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:41,850 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34037
-2022-08-26 14:00:41,850 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:41,851 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:41,856 - distributed.scheduler - INFO - Receive client connection: Client-25348de9-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:41,856 - distributed.core - INFO - Starting established connection
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.actors` attribute has been moved to `Worker.state.actors`
-  warnings.warn(
-2022-08-26 14:00:41,888 - distributed.scheduler - INFO - Receive client connection: Client-worker-253963f3-2582-11ed-9192-00d861bc4509
-2022-08-26 14:00:41,888 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:41,890 - distributed.scheduler - INFO - Remove client Client-25348de9-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:41,890 - distributed.scheduler - INFO - Remove client Client-25348de9-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_actor.py::test_one_thread_deadlock_timeout 2022-08-26 14:00:42,600 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:42,603 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:42,606 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:42,606 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35615
-2022-08-26 14:00:42,606 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:42,610 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-mqs41g2d', purging
-2022-08-26 14:00:42,641 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42701
-2022-08-26 14:00:42,641 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42701
-2022-08-26 14:00:42,641 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44179
-2022-08-26 14:00:42,641 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35615
-2022-08-26 14:00:42,641 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:42,641 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:42,641 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:42,641 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ps2pf8mi
-2022-08-26 14:00:42,641 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:42,827 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42701', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:43,009 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42701
-2022-08-26 14:00:43,009 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:43,010 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35615
-2022-08-26 14:00:43,010 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:43,011 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:43,015 - distributed.scheduler - INFO - Receive client connection: Client-25e5841a-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:43,015 - distributed.core - INFO - Starting established connection
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.actors` attribute has been moved to `Worker.state.actors`
-  warnings.warn(
-2022-08-26 14:00:43,047 - distributed.scheduler - INFO - Receive client connection: Client-worker-25ea35d7-2582-11ed-9244-00d861bc4509
-2022-08-26 14:00:43,048 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:43,048 - distributed.scheduler - INFO - Remove client Client-25e5841a-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:43,048 - distributed.scheduler - INFO - Remove client Client-25e5841a-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_actor.py::test_one_thread_deadlock_sync_client 2022-08-26 14:00:43,760 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:43,762 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:43,765 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:43,765 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39737
-2022-08-26 14:00:43,765 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:43,769 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ps2pf8mi', purging
-2022-08-26 14:00:43,812 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36979
-2022-08-26 14:00:43,812 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36979
-2022-08-26 14:00:43,812 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36847
-2022-08-26 14:00:43,812 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39737
-2022-08-26 14:00:43,812 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:43,812 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:43,812 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:43,812 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ijl07vav
-2022-08-26 14:00:43,812 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:43,995 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36979', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:44,177 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36979
-2022-08-26 14:00:44,178 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:44,178 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39737
-2022-08-26 14:00:44,178 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:44,179 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:44,184 - distributed.scheduler - INFO - Receive client connection: Client-2697d3aa-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:44,184 - distributed.core - INFO - Starting established connection
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.actors` attribute has been moved to `Worker.state.actors`
-  warnings.warn(
-2022-08-26 14:00:44,216 - distributed.scheduler - INFO - Receive client connection: Client-worker-269c91d4-2582-11ed-9256-00d861bc4509
-2022-08-26 14:00:44,216 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:44,217 - distributed.scheduler - INFO - Remove client Client-2697d3aa-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:44,218 - distributed.scheduler - INFO - Remove client Client-2697d3aa-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_actor.py::test_async_deadlock PASSED
-distributed/tests/test_actor.py::test_exception 2022-08-26 14:00:45,183 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:45,185 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:45,188 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:45,188 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46787
-2022-08-26 14:00:45,188 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:45,222 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41731
-2022-08-26 14:00:45,222 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41731
-2022-08-26 14:00:45,222 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42413
-2022-08-26 14:00:45,222 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46787
-2022-08-26 14:00:45,222 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:45,222 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:45,222 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:45,222 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8uyaokhq
-2022-08-26 14:00:45,222 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:45,247 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40347
-2022-08-26 14:00:45,247 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40347
-2022-08-26 14:00:45,247 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39215
-2022-08-26 14:00:45,247 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46787
-2022-08-26 14:00:45,247 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:45,247 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:45,247 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:45,247 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-khg7x4bc
-2022-08-26 14:00:45,247 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:45,426 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41731', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:45,609 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41731
-2022-08-26 14:00:45,609 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:45,609 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46787
-2022-08-26 14:00:45,609 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:45,610 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40347', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:45,610 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40347
-2022-08-26 14:00:45,610 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:45,610 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:45,610 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46787
-2022-08-26 14:00:45,611 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:45,612 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:45,616 - distributed.scheduler - INFO - Receive client connection: Client-27726ae7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:45,616 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:45,651 - distributed.scheduler - INFO - Remove client Client-27726ae7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:45,651 - distributed.scheduler - INFO - Remove client Client-27726ae7-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_actor.py::test_exception_async PASSED
-distributed/tests/test_actor.py::test_as_completed 2022-08-26 14:00:46,621 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:46,623 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:46,626 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:46,626 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37741
-2022-08-26 14:00:46,626 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:46,659 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46087
-2022-08-26 14:00:46,659 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46087
-2022-08-26 14:00:46,660 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44011
-2022-08-26 14:00:46,660 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37741
-2022-08-26 14:00:46,660 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:46,660 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:46,660 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:46,660 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-iuos1zc6
-2022-08-26 14:00:46,660 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:46,698 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33969
-2022-08-26 14:00:46,698 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33969
-2022-08-26 14:00:46,698 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44671
-2022-08-26 14:00:46,699 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37741
-2022-08-26 14:00:46,699 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:46,699 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:46,699 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:46,699 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x7iroqzv
-2022-08-26 14:00:46,699 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:46,872 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46087', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:47,057 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46087
-2022-08-26 14:00:47,057 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:47,057 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37741
-2022-08-26 14:00:47,058 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:47,058 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33969', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:47,058 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33969
-2022-08-26 14:00:47,058 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:47,059 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:47,059 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37741
-2022-08-26 14:00:47,059 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:47,060 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:47,065 - distributed.scheduler - INFO - Receive client connection: Client-284f65c0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:47,065 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:47,106 - distributed.scheduler - INFO - Remove client Client-284f65c0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:47,107 - distributed.scheduler - INFO - Remove client Client-284f65c0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:47,107 - distributed.scheduler - INFO - Close client connection: Client-284f65c0-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_actor.py::test_actor_future_awaitable PASSED
-distributed/tests/test_actor.py::test_actor_future_awaitable_deadlock PASSED
-distributed/tests/test_actor.py::test_serialize_with_pickle PASSED
-distributed/tests/test_as_completed.py::test_as_completed_async PASSED
-distributed/tests/test_as_completed.py::test_as_completed_sync 2022-08-26 14:00:48,802 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:48,804 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:48,806 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:48,806 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43533
-2022-08-26 14:00:48,807 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:48,841 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44823
-2022-08-26 14:00:48,841 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44823
-2022-08-26 14:00:48,841 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33659
-2022-08-26 14:00:48,841 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43533
-2022-08-26 14:00:48,841 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:48,841 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:48,841 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:48,841 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3jdjvhar
-2022-08-26 14:00:48,841 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:48,844 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40687
-2022-08-26 14:00:48,844 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40687
-2022-08-26 14:00:48,844 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46029
-2022-08-26 14:00:48,844 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43533
-2022-08-26 14:00:48,844 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:48,844 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:48,845 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:48,845 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p9q4wkhy
-2022-08-26 14:00:48,845 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:49,033 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40687', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:49,225 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40687
-2022-08-26 14:00:49,225 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:49,225 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43533
-2022-08-26 14:00:49,225 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:49,226 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44823', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:49,226 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44823
-2022-08-26 14:00:49,226 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:49,226 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:49,226 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43533
-2022-08-26 14:00:49,227 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:49,227 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:49,232 - distributed.scheduler - INFO - Receive client connection: Client-299a2dcc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:49,233 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:49,255 - distributed.scheduler - INFO - Remove client Client-299a2dcc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:49,255 - distributed.scheduler - INFO - Remove client Client-299a2dcc-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_as_completed.py::test_as_completed_with_non_futures 2022-08-26 14:00:49,983 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:49,985 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:49,987 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:49,988 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39573
-2022-08-26 14:00:49,988 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:50,018 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-3jdjvhar', purging
-2022-08-26 14:00:50,018 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-p9q4wkhy', purging
-2022-08-26 14:00:50,023 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44491
-2022-08-26 14:00:50,023 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44491
-2022-08-26 14:00:50,023 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37783
-2022-08-26 14:00:50,023 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39573
-2022-08-26 14:00:50,023 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:50,023 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:50,023 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:50,023 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d7cq1q9g
-2022-08-26 14:00:50,023 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:50,026 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39845
-2022-08-26 14:00:50,026 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39845
-2022-08-26 14:00:50,026 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46069
-2022-08-26 14:00:50,026 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39573
-2022-08-26 14:00:50,027 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:50,027 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:50,027 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:50,027 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mw2bhwqf
-2022-08-26 14:00:50,027 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:50,211 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44491', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:50,396 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44491
-2022-08-26 14:00:50,397 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:50,397 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39573
-2022-08-26 14:00:50,397 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:50,397 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39845', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:50,398 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:50,398 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39845
-2022-08-26 14:00:50,398 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:50,398 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39573
-2022-08-26 14:00:50,398 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:50,399 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:50,404 - distributed.scheduler - INFO - Receive client connection: Client-2a4ced11-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:50,404 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:50,416 - distributed.scheduler - INFO - Remove client Client-2a4ced11-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:50,416 - distributed.scheduler - INFO - Remove client Client-2a4ced11-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:50,417 - distributed.scheduler - INFO - Close client connection: Client-2a4ced11-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_as_completed.py::test_as_completed_add 2022-08-26 14:00:51,144 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:51,146 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:51,149 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:51,149 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40649
-2022-08-26 14:00:51,149 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:51,179 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-d7cq1q9g', purging
-2022-08-26 14:00:51,179 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-mw2bhwqf', purging
-2022-08-26 14:00:51,184 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33601
-2022-08-26 14:00:51,185 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33601
-2022-08-26 14:00:51,185 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36735
-2022-08-26 14:00:51,185 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40649
-2022-08-26 14:00:51,185 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:51,185 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:51,185 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:51,185 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q77yeqlp
-2022-08-26 14:00:51,185 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:51,189 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34425
-2022-08-26 14:00:51,189 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34425
-2022-08-26 14:00:51,189 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46635
-2022-08-26 14:00:51,189 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40649
-2022-08-26 14:00:51,189 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:51,189 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:51,189 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:51,189 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ajrg0yv7
-2022-08-26 14:00:51,189 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:51,373 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33601', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:51,556 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33601
-2022-08-26 14:00:51,556 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:51,556 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40649
-2022-08-26 14:00:51,556 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:51,557 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34425', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:51,557 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:51,557 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34425
-2022-08-26 14:00:51,557 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:51,557 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40649
-2022-08-26 14:00:51,557 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:51,558 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:51,563 - distributed.scheduler - INFO - Receive client connection: Client-2afdc853-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:51,563 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:51,650 - distributed.scheduler - INFO - Remove client Client-2afdc853-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:51,650 - distributed.scheduler - INFO - Remove client Client-2afdc853-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_as_completed.py::test_as_completed_update 2022-08-26 14:00:52,379 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:52,382 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:52,384 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:52,385 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34903
-2022-08-26 14:00:52,385 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:52,414 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-q77yeqlp', purging
-2022-08-26 14:00:52,415 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ajrg0yv7', purging
-2022-08-26 14:00:52,420 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33343
-2022-08-26 14:00:52,420 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33343
-2022-08-26 14:00:52,420 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41239
-2022-08-26 14:00:52,420 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34903
-2022-08-26 14:00:52,420 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:52,420 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:52,420 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:52,420 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-urgohwuf
-2022-08-26 14:00:52,420 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:52,425 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39583
-2022-08-26 14:00:52,425 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39583
-2022-08-26 14:00:52,425 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33327
-2022-08-26 14:00:52,425 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34903
-2022-08-26 14:00:52,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:52,425 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:52,425 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:52,425 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ckf4rjxn
-2022-08-26 14:00:52,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:52,618 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39583', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:52,804 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39583
-2022-08-26 14:00:52,805 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:52,805 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34903
-2022-08-26 14:00:52,805 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:52,805 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33343', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:52,806 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33343
-2022-08-26 14:00:52,806 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:52,806 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:52,806 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34903
-2022-08-26 14:00:52,806 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:52,807 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:52,812 - distributed.scheduler - INFO - Receive client connection: Client-2bbc5b2f-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:52,812 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:52,857 - distributed.scheduler - INFO - Remove client Client-2bbc5b2f-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:52,857 - distributed.scheduler - INFO - Remove client Client-2bbc5b2f-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_as_completed.py::test_as_completed_repeats 2022-08-26 14:00:53,585 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:53,588 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:53,590 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:53,590 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36305
-2022-08-26 14:00:53,590 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:53,620 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ckf4rjxn', purging
-2022-08-26 14:00:53,621 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-urgohwuf', purging
-2022-08-26 14:00:53,626 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39329
-2022-08-26 14:00:53,626 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39329
-2022-08-26 14:00:53,626 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36747
-2022-08-26 14:00:53,626 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36305
-2022-08-26 14:00:53,626 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:53,626 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:53,626 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:53,626 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4pyvsq7l
-2022-08-26 14:00:53,626 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:53,630 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37597
-2022-08-26 14:00:53,630 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37597
-2022-08-26 14:00:53,630 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39697
-2022-08-26 14:00:53,630 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36305
-2022-08-26 14:00:53,630 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:53,630 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:53,630 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:53,631 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cqksc0ow
-2022-08-26 14:00:53,631 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:53,823 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37597', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:54,008 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37597
-2022-08-26 14:00:54,008 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:54,008 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36305
-2022-08-26 14:00:54,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:54,009 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39329', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:54,009 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39329
-2022-08-26 14:00:54,009 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:54,009 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:54,009 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36305
-2022-08-26 14:00:54,010 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:54,010 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:54,015 - distributed.scheduler - INFO - Receive client connection: Client-2c740211-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:54,016 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:54,039 - distributed.scheduler - INFO - Remove client Client-2c740211-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:54,039 - distributed.scheduler - INFO - Remove client Client-2c740211-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_as_completed.py::test_as_completed_is_empty 2022-08-26 14:00:54,768 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:54,771 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:54,773 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:54,773 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44385
-2022-08-26 14:00:54,773 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:54,803 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-cqksc0ow', purging
-2022-08-26 14:00:54,803 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-4pyvsq7l', purging
-2022-08-26 14:00:54,808 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40605
-2022-08-26 14:00:54,808 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40605
-2022-08-26 14:00:54,808 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38111
-2022-08-26 14:00:54,808 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44385
-2022-08-26 14:00:54,808 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:54,808 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:54,808 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:54,808 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lb0_42sw
-2022-08-26 14:00:54,809 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:54,813 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34847
-2022-08-26 14:00:54,813 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34847
-2022-08-26 14:00:54,813 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42719
-2022-08-26 14:00:54,813 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44385
-2022-08-26 14:00:54,813 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:54,813 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:54,813 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:54,813 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kz7cxt_b
-2022-08-26 14:00:54,813 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:54,998 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40605', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:55,185 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40605
-2022-08-26 14:00:55,185 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:55,185 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44385
-2022-08-26 14:00:55,186 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:55,186 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34847', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:55,186 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34847
-2022-08-26 14:00:55,186 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:55,186 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:55,187 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44385
-2022-08-26 14:00:55,187 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:55,188 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:55,193 - distributed.scheduler - INFO - Receive client connection: Client-2d279827-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:55,193 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:55,215 - distributed.scheduler - INFO - Remove client Client-2d279827-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:55,215 - distributed.scheduler - INFO - Remove client Client-2d279827-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_as_completed.py::test_as_completed_cancel 2022-08-26 14:00:55,941 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:55,943 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:55,946 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:55,946 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40943
-2022-08-26 14:00:55,946 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:55,976 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-lb0_42sw', purging
-2022-08-26 14:00:55,976 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-kz7cxt_b', purging
-2022-08-26 14:00:55,981 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39717
-2022-08-26 14:00:55,981 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39717
-2022-08-26 14:00:55,981 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41103
-2022-08-26 14:00:55,981 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40943
-2022-08-26 14:00:55,981 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:55,981 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:55,981 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:55,981 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-un4njr_t
-2022-08-26 14:00:55,981 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:55,985 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38423
-2022-08-26 14:00:55,985 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38423
-2022-08-26 14:00:55,985 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36829
-2022-08-26 14:00:55,985 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40943
-2022-08-26 14:00:55,985 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:55,985 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:55,985 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:55,985 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y49v4447
-2022-08-26 14:00:55,985 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:56,168 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39717', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:56,351 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39717
-2022-08-26 14:00:56,351 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:56,351 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40943
-2022-08-26 14:00:56,351 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:56,352 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38423', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:56,352 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:56,352 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38423
-2022-08-26 14:00:56,352 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:56,353 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40943
-2022-08-26 14:00:56,353 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:56,354 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:56,359 - distributed.scheduler - INFO - Receive client connection: Client-2dd99c45-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:56,359 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:56,362 - distributed.scheduler - INFO - Client Client-2dd99c45-2582-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:00:56,463 - distributed.scheduler - INFO - Scheduler cancels key inc-03d935909bba38f9a49655e867cbf56a.  Force=False
-PASSED2022-08-26 14:00:56,577 - distributed.scheduler - INFO - Remove client Client-2dd99c45-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:56,577 - distributed.scheduler - INFO - Remove client Client-2dd99c45-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:56,577 - distributed.scheduler - INFO - Close client connection: Client-2dd99c45-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_as_completed.py::test_as_completed_cancel_last 2022-08-26 14:00:57,308 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:57,311 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:57,313 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:57,313 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46091
-2022-08-26 14:00:57,313 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:57,343 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-y49v4447', purging
-2022-08-26 14:00:57,343 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-un4njr_t', purging
-2022-08-26 14:00:57,348 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37163
-2022-08-26 14:00:57,348 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37163
-2022-08-26 14:00:57,348 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37109
-2022-08-26 14:00:57,348 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46091
-2022-08-26 14:00:57,348 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:57,349 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:57,349 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:57,349 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kntz1a5d
-2022-08-26 14:00:57,349 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:57,353 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39819
-2022-08-26 14:00:57,353 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39819
-2022-08-26 14:00:57,353 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39217
-2022-08-26 14:00:57,353 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46091
-2022-08-26 14:00:57,353 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:57,353 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:57,353 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:57,353 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pqhz9rby
-2022-08-26 14:00:57,353 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:57,541 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39819', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:57,728 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39819
-2022-08-26 14:00:57,729 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:57,729 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46091
-2022-08-26 14:00:57,729 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:57,729 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37163', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:57,730 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:57,730 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37163
-2022-08-26 14:00:57,730 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:57,730 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46091
-2022-08-26 14:00:57,730 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:57,731 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:57,736 - distributed.scheduler - INFO - Receive client connection: Client-2eabaca3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:57,736 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:00:57,758 - distributed.scheduler - INFO - Remove client Client-2eabaca3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:57,758 - distributed.scheduler - INFO - Remove client Client-2eabaca3-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_as_completed.py::test_async_for_py2_equivalent PASSED
-distributed/tests/test_as_completed.py::test_as_completed_error_async 2022-08-26 14:00:58,071 - distributed.worker - WARNING - Compute Failed
-Key:       throws-e7547614a2ac592d36b4a0b751337778
-Function:  throws
-args:      (1)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED
-distributed/tests/test_as_completed.py::test_as_completed_error 2022-08-26 14:00:58,992 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:00:58,994 - distributed.scheduler - INFO - State start
-2022-08-26 14:00:58,996 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:00:58,997 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39501
-2022-08-26 14:00:58,997 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:00:59,004 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42015
-2022-08-26 14:00:59,004 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42015
-2022-08-26 14:00:59,004 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36673
-2022-08-26 14:00:59,004 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39501
-2022-08-26 14:00:59,004 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:59,004 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:59,004 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:59,004 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7xcdim6u
-2022-08-26 14:00:59,004 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:59,004 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38367
-2022-08-26 14:00:59,004 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38367
-2022-08-26 14:00:59,004 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42881
-2022-08-26 14:00:59,004 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39501
-2022-08-26 14:00:59,004 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:59,004 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:00:59,004 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:00:59,004 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ymr2djxd
-2022-08-26 14:00:59,005 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:59,196 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42015', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:59,384 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42015
-2022-08-26 14:00:59,384 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:59,384 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39501
-2022-08-26 14:00:59,384 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:59,385 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38367', status: init, memory: 0, processing: 0>
-2022-08-26 14:00:59,385 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38367
-2022-08-26 14:00:59,385 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:59,385 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:59,385 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39501
-2022-08-26 14:00:59,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:00:59,386 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:59,392 - distributed.scheduler - INFO - Receive client connection: Client-2fa84da3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:59,392 - distributed.core - INFO - Starting established connection
-2022-08-26 14:00:59,480 - distributed.worker - WARNING - Compute Failed
-Key:       throws-e7547614a2ac592d36b4a0b751337778
-Function:  throws
-args:      (1)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED2022-08-26 14:00:59,493 - distributed.scheduler - INFO - Remove client Client-2fa84da3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:59,493 - distributed.scheduler - INFO - Remove client Client-2fa84da3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:00:59,493 - distributed.scheduler - INFO - Close client connection: Client-2fa84da3-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_as_completed.py::test_as_completed_with_results 2022-08-26 14:01:00,230 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:00,233 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:00,235 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:00,235 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43537
-2022-08-26 14:01:00,235 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:00,237 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ymr2djxd', purging
-2022-08-26 14:01:00,238 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-7xcdim6u', purging
-2022-08-26 14:01:00,242 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33763
-2022-08-26 14:01:00,242 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33763
-2022-08-26 14:01:00,242 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37203
-2022-08-26 14:01:00,242 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43537
-2022-08-26 14:01:00,242 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:00,242 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:00,242 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:00,242 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bnc0txn6
-2022-08-26 14:01:00,242 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:00,243 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33373
-2022-08-26 14:01:00,243 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33373
-2022-08-26 14:01:00,243 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44923
-2022-08-26 14:01:00,243 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43537
-2022-08-26 14:01:00,243 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:00,243 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:00,243 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:00,243 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_yl9l35u
-2022-08-26 14:01:00,243 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:00,457 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33373', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:00,641 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33373
-2022-08-26 14:01:00,641 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:00,641 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43537
-2022-08-26 14:01:00,641 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:00,642 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33763', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:00,642 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33763
-2022-08-26 14:01:00,642 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:00,642 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:00,642 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43537
-2022-08-26 14:01:00,643 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:00,643 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:00,648 - distributed.scheduler - INFO - Receive client connection: Client-306820b0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:00,648 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:00,652 - distributed.scheduler - INFO - Client Client-306820b0-2582-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:01:00,745 - distributed.worker - WARNING - Compute Failed
-Key:       throws-e7547614a2ac592d36b4a0b751337778
-Function:  throws
-args:      (1)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-2022-08-26 14:01:00,752 - distributed.scheduler - INFO - Scheduler cancels key inc-aa9589d43f33371b09a8b12fb0f1f11d.  Force=False
-PASSED2022-08-26 14:01:00,754 - distributed.scheduler - INFO - Remove client Client-306820b0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:00,754 - distributed.scheduler - INFO - Remove client Client-306820b0-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_as_completed.py::test_as_completed_with_results_async 2022-08-26 14:01:00,807 - distributed.worker - WARNING - Compute Failed
-Key:       throws-e7547614a2ac592d36b4a0b751337778
-Function:  throws
-args:      (1)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED
-distributed/tests/test_as_completed.py::test_as_completed_with_results_no_raise 2022-08-26 14:01:01,819 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:01,821 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:01,824 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:01,824 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46109
-2022-08-26 14:01:01,824 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:01,831 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42969
-2022-08-26 14:01:01,831 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42969
-2022-08-26 14:01:01,831 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34027
-2022-08-26 14:01:01,831 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46109
-2022-08-26 14:01:01,831 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:01,831 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:01,831 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38811
-2022-08-26 14:01:01,831 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:01,831 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5ppmy5v3
-2022-08-26 14:01:01,831 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38811
-2022-08-26 14:01:01,831 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42595
-2022-08-26 14:01:01,831 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:01,831 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46109
-2022-08-26 14:01:01,832 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:01,832 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:01,832 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:01,832 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sia7pvat
-2022-08-26 14:01:01,832 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:02,025 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38811', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:02,214 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38811
-2022-08-26 14:01:02,214 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:02,214 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46109
-2022-08-26 14:01:02,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:02,215 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42969', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:02,215 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:02,215 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42969
-2022-08-26 14:01:02,216 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:02,216 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46109
-2022-08-26 14:01:02,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:02,217 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:02,222 - distributed.scheduler - INFO - Receive client connection: Client-31583a6d-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:02,222 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:02,226 - distributed.scheduler - INFO - Client Client-31583a6d-2582-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:01:02,311 - distributed.worker - WARNING - Compute Failed
-Key:       throws-e7547614a2ac592d36b4a0b751337778
-Function:  throws
-args:      (1)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-2022-08-26 14:01:02,327 - distributed.scheduler - INFO - Scheduler cancels key inc-aa9589d43f33371b09a8b12fb0f1f11d.  Force=False
-PASSED2022-08-26 14:01:02,338 - distributed.scheduler - INFO - Remove client Client-31583a6d-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:02,339 - distributed.scheduler - INFO - Remove client Client-31583a6d-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:02,339 - distributed.scheduler - INFO - Close client connection: Client-31583a6d-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_as_completed.py::test_str PASSED
-distributed/tests/test_as_completed.py::test_as_completed_with_results_no_raise_async 2022-08-26 14:01:02,629 - distributed.worker - WARNING - Compute Failed
-Key:       x
-Function:  throws
-args:      (1)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED
-distributed/tests/test_as_completed.py::test_clear PASSED
-distributed/tests/test_asyncprocess.py::test_simple PASSED
-distributed/tests/test_asyncprocess.py::test_exitcode PASSED
-distributed/tests/test_asyncprocess.py::test_sigint_from_same_process PASSED
-distributed/tests/test_asyncprocess.py::test_sigterm_from_parent_process PASSED
-distributed/tests/test_asyncprocess.py::test_terminate PASSED
-distributed/tests/test_asyncprocess.py::test_close PASSED
-distributed/tests/test_asyncprocess.py::test_exit_callback PASSED
-distributed/tests/test_asyncprocess.py::test_child_main_thread PASSED
-distributed/tests/test_asyncprocess.py::test_num_fds PASSED
-distributed/tests/test_asyncprocess.py::test_terminate_after_stop PASSED
-distributed/tests/test_asyncprocess.py::test_kill PASSED
-distributed/tests/test_asyncprocess.py::test_asyncprocess_child_teardown_on_parent_exit PASSED
-distributed/tests/test_batched.py::test_BatchedSend PASSED
-distributed/tests/test_batched.py::test_send_before_start PASSED
-distributed/tests/test_batched.py::test_send_after_stream_start PASSED
-distributed/tests/test_batched.py::test_send_before_close PASSED
-distributed/tests/test_batched.py::test_close_closed PASSED
-distributed/tests/test_batched.py::test_close_not_started PASSED
-distributed/tests/test_batched.py::test_close_twice PASSED
-distributed/tests/test_batched.py::test_stress SKIPPED (need --runsl...)
-distributed/tests/test_batched.py::test_sending_traffic_jam PASSED
-distributed/tests/test_batched.py::test_large_traffic_jam SKIPPED (n...)
-distributed/tests/test_batched.py::test_serializers 2022-08-26 14:01:12,272 - distributed.protocol.core - CRITICAL - Failed to Serialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 109, in dumps
-    frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/msgpack/__init__.py", line 38, in packb
-    return Packer(**kwargs).pack(o)
-  File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 285, in msgpack._cmsgpack.Packer._pack
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 100, in _encode_default
-    frames.extend(create_serialized_sub_frames(obj))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 60, in create_serialized_sub_frames
-    sub_header, sub_frames = serialize_and_split(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 266, in serialize
-    return serialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type function', '<function test_serializers.<locals>.<lambda> at 0x56403e6cf440>')
-2022-08-26 14:01:12,273 - distributed.comm.utils - ERROR - ('Could not serialize object of type function', '<function test_serializers.<locals>.<lambda> at 0x56403e6cf440>')
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/utils.py", line 55, in _to_frames
-    return list(protocol.dumps(msg, **kwargs))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 109, in dumps
-    frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/msgpack/__init__.py", line 38, in packb
-    return Packer(**kwargs).pack(o)
-  File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 285, in msgpack._cmsgpack.Packer._pack
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 100, in _encode_default
-    frames.extend(create_serialized_sub_frames(obj))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 60, in create_serialized_sub_frames
-    sub_header, sub_frames = serialize_and_split(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 266, in serialize
-    return serialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type function', '<function test_serializers.<locals>.<lambda> at 0x56403e6cf440>')
-2022-08-26 14:01:12,273 - distributed.batched - ERROR - Error in batched write
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 271, in write
-    frames = await to_frames(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/utils.py", line 72, in to_frames
-    return _to_frames()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/utils.py", line 55, in _to_frames
-    return list(protocol.dumps(msg, **kwargs))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 109, in dumps
-    frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/msgpack/__init__.py", line 38, in packb
-    return Packer(**kwargs).pack(o)
-  File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 285, in msgpack._cmsgpack.Packer._pack
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 100, in _encode_default
-    frames.extend(create_serialized_sub_frames(obj))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 60, in create_serialized_sub_frames
-    sub_header, sub_frames = serialize_and_split(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 266, in serialize
-    return serialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type function', '<function test_serializers.<locals>.<lambda> at 0x56403e6cf440>')
-PASSED
-distributed/tests/test_cancelled_state.py::test_abort_execution_release PASSED
-distributed/tests/test_cancelled_state.py::test_abort_execution_reschedule PASSED
-distributed/tests/test_cancelled_state.py::test_abort_execution_add_as_dependency PASSED
-distributed/tests/test_cancelled_state.py::test_abort_execution_to_fetch PASSED
-distributed/tests/test_cancelled_state.py::test_worker_stream_died_during_comm 2022-08-26 14:01:13,450 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:40273
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 317, in write
-    raise StreamClosedError()
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1992, in gather_dep
-    response = await get_data_from_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2731, in get_data_from_worker
-    return await retry_operation(_get_data, operation="get_data_from_worker")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 383, in retry_operation
-    return await retry(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 368, in retry
-    return await coro()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2711, in _get_data
-    response = await send_recv(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 917, in send_recv
-    await comm.write(msg, serializers=serializers, on_error="raise")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 1817, in write
-    return await self.comm.write(msg, serializers=serializers, on_error=on_error)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 328, in write
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed) ConnectionPool local=tcp://127.0.0.1:35228 remote=tcp://127.0.0.1:40273>: Stream is closed
-PASSED
-distributed/tests/test_cancelled_state.py::test_flight_to_executing_via_cancelled_resumed 2022-08-26 14:01:13,814 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:38815
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1992, in gather_dep
-    response = await get_data_from_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2731, in get_data_from_worker
-    return await retry_operation(_get_data, operation="get_data_from_worker")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 383, in retry_operation
-    return await retry(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 368, in retry
-    return await coro()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2711, in _get_data
-    response = await send_recv(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 919, in send_recv
-    response = await comm.read(deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 241, in read
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://127.0.0.1:59720 remote=tcp://127.0.0.1:38815>: Stream is closed
-PASSED
-distributed/tests/test_cancelled_state.py::test_executing_cancelled_error 2022-08-26 14:01:14,080 - distributed.worker - WARNING - Compute Failed
-Key:       f1
-Function:  wait_and_raise
-args:      ()
-kwargs:    {}
-Exception: 'RuntimeError()'
-
-PASSED
-distributed/tests/test_cancelled_state.py::test_flight_cancelled_error PASSED
-distributed/tests/test_cancelled_state.py::test_in_flight_lost_after_resumed PASSED
-distributed/tests/test_cancelled_state.py::test_cancelled_error 2022-08-26 14:01:14,625 - distributed.worker - WARNING - Compute Failed
-Key:       fut1
-Function:  block_execution
-args:      (<distributed.event.Event object at 0x564040367910>, <distributed.lock.Lock object at 0x56403e6cf4a0>)
-kwargs:    {}
-Exception: 'RuntimeError()'
-
-PASSED
-distributed/tests/test_cancelled_state.py::test_cancelled_error_with_resources 2022-08-26 14:01:14,882 - distributed.worker - WARNING - Compute Failed
-Key:       fut1
-Function:  block_execution
-args:      (<distributed.event.Event object at 0x5640403e4e50>, <distributed.lock.Lock object at 0x56403fcbe560>)
-kwargs:    {}
-Exception: 'RuntimeError()'
-
-PASSED
-distributed/tests/test_cancelled_state.py::test_cancelled_resumed_after_flight_with_dependencies 2022-08-26 14:01:15,118 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:40841
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1992, in gather_dep
-    response = await get_data_from_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2731, in get_data_from_worker
-    return await retry_operation(_get_data, operation="get_data_from_worker")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 383, in retry_operation
-    return await retry(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 368, in retry
-    return await coro()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2711, in _get_data
-    response = await send_recv(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 919, in send_recv
-    response = await comm.read(deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 241, in read
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://127.0.0.1:42584 remote=tcp://127.0.0.1:40841>: Stream is closed
-PASSED
-distributed/tests/test_cancelled_state.py::test_cancelled_resumed_after_flight_with_dependencies_workerstate PASSED
-distributed/tests/test_cancelled_state.py::test_resumed_cancelled_handle_compute[True-True] 2022-08-26 14:01:15,552 - distributed.worker - WARNING - Compute Failed
-Key:       f3
-Function:  block
-args:      (3)
-kwargs:    {'lock': <distributed.lock.Lock object at 0x5640400da840>, 'enter_event': <distributed.event.Event object at 0x5640401cac50>, 'exit_event': <distributed.event.Event object at 0x56403e7e4940>}
-Exception: "RuntimeError('test error')"
-
-PASSED
-distributed/tests/test_cancelled_state.py::test_resumed_cancelled_handle_compute[True-False] 2022-08-26 14:01:15,962 - distributed.worker - WARNING - Compute Failed
-Key:       f3
-Function:  block
-args:      (3)
-kwargs:    {'lock': <distributed.lock.Lock object at 0x5640404ee540>, 'enter_event': <distributed.event.Event object at 0x56404002bfa0>, 'exit_event': <distributed.event.Event object at 0x56403d890400>}
-Exception: "RuntimeError('test error')"
-
-PASSED
-distributed/tests/test_cancelled_state.py::test_resumed_cancelled_handle_compute[False-True] PASSED
-distributed/tests/test_cancelled_state.py::test_resumed_cancelled_handle_compute[False-False] PASSED
-distributed/tests/test_cancelled_state.py::test_deadlock_cancelled_after_inflight_before_gather_from_worker[False-resumed] PASSED
-distributed/tests/test_cancelled_state.py::test_deadlock_cancelled_after_inflight_before_gather_from_worker[False-cancelled] PASSED
-distributed/tests/test_cancelled_state.py::test_deadlock_cancelled_after_inflight_before_gather_from_worker[True-resumed] 2022-08-26 14:01:18,265 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:33337
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x56403e8876d0>: ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1992, in gather_dep
-    response = await get_data_from_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2731, in get_data_from_worker
-    return await retry_operation(_get_data, operation="get_data_from_worker")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 383, in retry_operation
-    return await retry(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 368, in retry
-    return await coro()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2708, in _get_data
-    comm = await rpc.connect(worker)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1372, in connect
-    return await connect_attempt
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1308, in _connect
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 317, in connect
-    raise OSError(
-OSError: Timed out trying to connect to tcp://127.0.0.1:33337 after 0.5 s
-PASSED
-distributed/tests/test_cancelled_state.py::test_deadlock_cancelled_after_inflight_before_gather_from_worker[True-cancelled] PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_executing_to_executing[executing] PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_executing_to_executing[long-running] PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_flight_to_flight PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_executing_skips_fetch_on_success[executing] PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_executing_skips_fetch_on_success[long-running] PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_executing_failure_to_fetch[executing] PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_executing_failure_to_fetch[long-running] PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_flight_skips_executing_on_success PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_flight_failure_to_executing[False] PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_flight_failure_to_executing[True] PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_resumed_fetch_to_executing[executing] PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_resumed_fetch_to_executing[long-running] PASSED
-distributed/tests/test_cancelled_state.py::test_workerstate_resumed_waiting_to_flight PASSED
-distributed/tests/test_cancelled_state.py::test_execute_preamble_early_cancel[executing-False-execute] PASSED
-distributed/tests/test_cancelled_state.py::test_execute_preamble_early_cancel[executing-False-deserialize_task] PASSED
-distributed/tests/test_cancelled_state.py::test_execute_preamble_early_cancel[executing-True-execute] PASSED
-distributed/tests/test_cancelled_state.py::test_execute_preamble_early_cancel[executing-True-deserialize_task] PASSED
-distributed/tests/test_cancelled_state.py::test_execute_preamble_early_cancel[resumed-False-execute] PASSED
-distributed/tests/test_cancelled_state.py::test_execute_preamble_early_cancel[resumed-False-deserialize_task] PASSED
-distributed/tests/test_cancelled_state.py::test_execute_preamble_early_cancel[resumed-True-execute] PASSED
-distributed/tests/test_cancelled_state.py::test_execute_preamble_early_cancel[resumed-True-deserialize_task] PASSED
-distributed/tests/test_cancelled_state.py::test_cancel_with_dependencies_in_memory[ExecuteSuccessEvent-False] PASSED
-distributed/tests/test_cancelled_state.py::test_cancel_with_dependencies_in_memory[ExecuteSuccessEvent-True] PASSED
-distributed/tests/test_cancelled_state.py::test_cancel_with_dependencies_in_memory[ExecuteFailureEvent-False] PASSED
-distributed/tests/test_cancelled_state.py::test_cancel_with_dependencies_in_memory[ExecuteFailureEvent-True] PASSED
-distributed/tests/test_cancelled_state.py::test_cancel_with_dependencies_in_memory[RescheduleEvent-False] PASSED
-distributed/tests/test_cancelled_state.py::test_cancel_with_dependencies_in_memory[RescheduleEvent-True] PASSED
-distributed/tests/test_chaos.py::test_KillWorker[sys.exit] 2022-08-26 14:01:21,403 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37957
-2022-08-26 14:01:21,403 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37957
-2022-08-26 14:01:21,403 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:01:21,403 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36123
-2022-08-26 14:01:21,404 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46309
-2022-08-26 14:01:21,404 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:21,404 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:21,404 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:21,404 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y7pviuvk
-2022-08-26 14:01:21,404 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:21,639 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46309
-2022-08-26 14:01:21,640 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:21,640 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:21,656 - distributed.worker - INFO - Starting Worker plugin KillWorker-fd6c3275-c031-4ab5-8dd1-693ed10b84c1
-PASSED
-distributed/tests/test_chaos.py::test_KillWorker[graceful] 2022-08-26 14:01:22,490 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43989
-2022-08-26 14:01:22,490 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43989
-2022-08-26 14:01:22,490 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:01:22,490 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37455
-2022-08-26 14:01:22,490 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44327
-2022-08-26 14:01:22,490 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:22,490 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:22,490 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:22,490 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ge8zqqq5
-2022-08-26 14:01:22,490 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:22,732 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44327
-2022-08-26 14:01:22,732 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:22,732 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:22,747 - distributed.worker - INFO - Starting Worker plugin KillWorker-7853ede0-2bb7-4855-af1c-178570e12a2a
-2022-08-26 14:01:22,748 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43989
-2022-08-26 14:01:22,748 - distributed.worker - INFO - Not waiting on executor to close
-2022-08-26 14:01:22,749 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0abf12e5-684c-4a1d-8760-68c1e3455b55 Address tcp://127.0.0.1:43989 Status: Status.closing
-2022-08-26 14:01:22,749 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:01:22,750 - distributed.nanny - ERROR - Worker process died unexpectedly
-PASSED
-distributed/tests/test_chaos.py::test_KillWorker[segfault] 2022-08-26 14:01:23,563 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41313
-2022-08-26 14:01:23,563 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41313
-2022-08-26 14:01:23,563 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:01:23,563 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46743
-2022-08-26 14:01:23,563 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33987
-2022-08-26 14:01:23,563 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:23,563 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:23,563 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:23,563 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j3esm1w7
-2022-08-26 14:01:23,563 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:23,805 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33987
-2022-08-26 14:01:23,805 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:23,806 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:23,831 - distributed.worker - INFO - Starting Worker plugin KillWorker-d8fc298a-a47b-4858-91e4-91829fe0d2ba
-2022-08-26 14:01:23,839 - distributed.nanny - ERROR - Error in Nanny killing Worker subprocess
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 595, in close
-    await self.kill(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 386, in kill
-    await self.process.kill(timeout=0.8 * (deadline - time()))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 793, in kill
-    self.child_stop_q.close()
-AttributeError: 'NoneType' object has no attribute 'close'
-PASSED
-distributed/tests/test_client.py::test_submit PASSED
-distributed/tests/test_client.py::test_map PASSED
-distributed/tests/test_client.py::test_map_empty PASSED
-distributed/tests/test_client.py::test_map_keynames PASSED
-distributed/tests/test_client.py::test_map_retries 2022-08-26 14:01:25,104 - distributed.worker - WARNING - Compute Failed
-Key:       apply-432613e8d470555b6ef1bd6402a27f16
-Function:  apply
-args:      (<function varying.<locals>.func at 0x56403ff44920>)
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:25,104 - distributed.worker - WARNING - Compute Failed
-Key:       apply-b5535c736e9fc2a14e35afa4da057979
-Function:  apply
-args:      (<function varying.<locals>.func at 0x56403602e9f0>)
-kwargs:    {}
-Exception: "ZeroDivisionError('seven')"
-
-2022-08-26 14:01:25,109 - distributed.worker - WARNING - Compute Failed
-Key:       apply-b5535c736e9fc2a14e35afa4da057979
-Function:  apply
-args:      (<function varying.<locals>.func at 0x56404013bae0>)
-kwargs:    {}
-Exception: "ZeroDivisionError('eight')"
-
-2022-08-26 14:01:25,121 - distributed.worker - WARNING - Compute Failed
-Key:       apply-f42367a7-b498-47ec-ad87-c4087f16bcb0-0
-Function:  apply
-args:      (<function varying.<locals>.func at 0x56403fc3d1d0>)
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:25,122 - distributed.worker - WARNING - Compute Failed
-Key:       apply-f42367a7-b498-47ec-ad87-c4087f16bcb0-2
-Function:  apply
-args:      (<function varying.<locals>.func at 0x56403e73fc10>)
-kwargs:    {}
-Exception: "ZeroDivisionError('seven')"
-
-2022-08-26 14:01:25,126 - distributed.worker - WARNING - Compute Failed
-Key:       apply-f42367a7-b498-47ec-ad87-c4087f16bcb0-2
-Function:  apply
-args:      (<function varying.<locals>.func at 0x56403e5bc800>)
-kwargs:    {}
-Exception: "ZeroDivisionError('eight')"
-
-2022-08-26 14:01:25,141 - distributed.worker - WARNING - Compute Failed
-Key:       apply-5770c26b-1853-4826-a5ae-bb88b96b91ea-0
-Function:  apply
-args:      (<function varying.<locals>.func at 0x56403ff92530>)
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:25,142 - distributed.worker - WARNING - Compute Failed
-Key:       apply-5770c26b-1853-4826-a5ae-bb88b96b91ea-2
-Function:  apply
-args:      (<function varying.<locals>.func at 0x56403dd4d520>)
-kwargs:    {}
-Exception: "ZeroDivisionError('seven')"
-
-PASSED
-distributed/tests/test_client.py::test_map_batch_size PASSED
-distributed/tests/test_client.py::test_custom_key_with_batches PASSED
-distributed/tests/test_client.py::test_compute_retries 2022-08-26 14:01:26,079 - distributed.worker - WARNING - Compute Failed
-Key:       func-931eddcf-3e7a-490f-96f1-42d3ff6e7e1d
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:26,089 - distributed.worker - WARNING - Compute Failed
-Key:       func-dcb7c135-262e-4a44-807d-8ec019a73ec8
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:26,091 - distributed.worker - WARNING - Compute Failed
-Key:       func-dcb7c135-262e-4a44-807d-8ec019a73ec8
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:01:26,099 - distributed.worker - WARNING - Compute Failed
-Key:       func-e38b20d6-d5df-4ac9-82de-d16b4a821c0c
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:26,101 - distributed.worker - WARNING - Compute Failed
-Key:       func-e38b20d6-d5df-4ac9-82de-d16b4a821c0c
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:01:26,112 - distributed.worker - WARNING - Compute Failed
-Key:       func-3d04be1c-bd95-4546-9c5f-941591d1b239
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:26,116 - distributed.worker - WARNING - Compute Failed
-Key:       func-3d04be1c-bd95-4546-9c5f-941591d1b239
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-PASSED
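test_compute_retries produces the same pattern through Client.compute: the task raises ZeroDivisionError('one'), then ('two'), and succeeds on the final allowed attempt. A minimal sketch under the same assumptions as above (in-process client, illustrative counter):

    import dask
    from distributed import Client

    calls = {"n": 0}

    @dask.delayed
    def func():
        # Raise 'one', then 'two', then return a value, mirroring the
        # pairs of warnings in the log above.
        calls["n"] += 1
        if calls["n"] < 3:
            raise ZeroDivisionError(["one", "two"][calls["n"] - 1])
        return 42

    if __name__ == "__main__":
        with Client(processes=False) as client:
            fut = client.compute(func(), retries=2)
            print(fut.result())   # 42 on the third attempt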
-distributed/tests/test_client.py::test_compute_retries_annotations 2022-08-26 14:01:26,352 - distributed.worker - WARNING - Compute Failed
-Key:       func-c1134779-bb3e-4c23-a738-ca72119c7858
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('five')"
-
-2022-08-26 14:01:26,353 - distributed.worker - WARNING - Compute Failed
-Key:       func-08a37960-5678-4724-946d-934f65b4c4d3
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:26,356 - distributed.worker - WARNING - Compute Failed
-Key:       func-08a37960-5678-4724-946d-934f65b4c4d3
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:01:26,365 - distributed.worker - WARNING - Compute Failed
-Key:       func-7b3e41fa-a002-4929-8f07-674d58436384
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:26,370 - distributed.worker - WARNING - Compute Failed
-Key:       func-c6efb620-502e-447d-8995-24ba706c6b4a
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('five')"
-
-2022-08-26 14:01:26,374 - distributed.worker - WARNING - Compute Failed
-Key:       func-c6efb620-502e-447d-8995-24ba706c6b4a
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('six')"
-
-PASSED
-distributed/tests/test_client.py::test_retries_get 2022-08-26 14:01:27,312 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:27,314 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:27,317 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:27,317 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37995
-2022-08-26 14:01:27,317 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:27,324 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34639
-2022-08-26 14:01:27,324 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45231
-2022-08-26 14:01:27,324 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34639
-2022-08-26 14:01:27,324 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45231
-2022-08-26 14:01:27,325 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44445
-2022-08-26 14:01:27,325 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36897
-2022-08-26 14:01:27,325 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37995
-2022-08-26 14:01:27,325 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37995
-2022-08-26 14:01:27,325 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:27,325 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:27,325 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:27,325 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:27,325 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:27,325 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:27,325 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c9x1zf1q
-2022-08-26 14:01:27,325 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p75zxpi4
-2022-08-26 14:01:27,325 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:27,325 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:27,519 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45231', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:27,706 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45231
-2022-08-26 14:01:27,707 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:27,707 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37995
-2022-08-26 14:01:27,707 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:27,707 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34639', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:27,708 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34639
-2022-08-26 14:01:27,708 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:27,708 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:27,708 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37995
-2022-08-26 14:01:27,708 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:27,709 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:27,714 - distributed.scheduler - INFO - Receive client connection: Client-4089fd08-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:27,714 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:27,802 - distributed.worker - WARNING - Compute Failed
-Key:       func-0fd8a3e6-78d8-4d38-bf3e-0d725ebe3d56
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:27,877 - distributed.worker - WARNING - Compute Failed
-Key:       func-0fd8a3e6-78d8-4d38-bf3e-0d725ebe3d56
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:27,880 - distributed.worker - WARNING - Compute Failed
-Key:       func-0fd8a3e6-78d8-4d38-bf3e-0d725ebe3d56
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:01:27,884 - distributed.worker - WARNING - Compute Failed
-Key:       func-0fd8a3e6-78d8-4d38-bf3e-0d725ebe3d56
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:01:27,896 - distributed.worker - WARNING - Compute Failed
-Key:       func-5527c690-ff02-4b6f-9686-9efe4a754a22
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-PASSED2022-08-26 14:01:27,906 - distributed.scheduler - INFO - Remove client Client-4089fd08-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:27,906 - distributed.scheduler - INFO - Remove client Client-4089fd08-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:27,906 - distributed.scheduler - INFO - Close client connection: Client-4089fd08-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_compute_persisted_retries 2022-08-26 14:01:27,958 - distributed.worker - WARNING - Compute Failed
-Key:       func-4ec7d861-e9d0-4b4f-8d5e-cd21a21e71a1
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:27,969 - distributed.worker - WARNING - Compute Failed
-Key:       func-caf5b0ab-c2df-4945-b6eb-c513d3d1b18b
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:27,970 - distributed.worker - WARNING - Compute Failed
-Key:       func-caf5b0ab-c2df-4945-b6eb-c513d3d1b18b
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:01:27,980 - distributed.worker - WARNING - Compute Failed
-Key:       func-81cef5a8-788f-4093-afb4-d4d95181698f
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:27,982 - distributed.worker - WARNING - Compute Failed
-Key:       func-81cef5a8-788f-4093-afb4-d4d95181698f
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:01:27,992 - distributed.worker - WARNING - Compute Failed
-Key:       func-f47b1bf6-7780-46fc-ade0-1cb8228ded7a
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:27,994 - distributed.worker - WARNING - Compute Failed
-Key:       func-f47b1bf6-7780-46fc-ade0-1cb8228ded7a
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-PASSED
-distributed/tests/test_client.py::test_persist_retries 2022-08-26 14:01:28,234 - distributed.worker - WARNING - Compute Failed
-Key:       func-8fb82628-0710-43f9-af2d-9ed828e72f81
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:28,236 - distributed.worker - WARNING - Compute Failed
-Key:       func-8fb82628-0710-43f9-af2d-9ed828e72f81
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:01:28,245 - distributed.worker - WARNING - Compute Failed
-Key:       func-611bafb4-93c0-442e-9799-60cec4abbb05
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:28,247 - distributed.worker - WARNING - Compute Failed
-Key:       func-611bafb4-93c0-442e-9799-60cec4abbb05
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-PASSED
-distributed/tests/test_client.py::test_persist_retries_annotations 2022-08-26 14:01:28,487 - distributed.worker - WARNING - Compute Failed
-Key:       func-eada5429-77d5-4842-b0eb-7ed74b271726
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:01:28,487 - distributed.worker - WARNING - Compute Failed
-Key:       func-64574652-ec88-4504-b6f4-cc99db480771
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('five')"
-
-2022-08-26 14:01:28,491 - distributed.worker - WARNING - Compute Failed
-Key:       func-64574652-ec88-4504-b6f4-cc99db480771
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('six')"
-
-PASSED
-distributed/tests/test_client.py::test_retries_dask_array PASSED
-distributed/tests/test_client.py::test_future_repr PASSED
-distributed/tests/test_client.py::test_future_tuple_repr PASSED
-distributed/tests/test_client.py::test_Future_exception 2022-08-26 14:01:29,505 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
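test_Future_exception checks the client-side view of the failure logged above: Future.exception() returns the remote ZeroDivisionError instead of raising it, and the future's status becomes "error". A small sketch assuming an in-process client:

    from distributed import Client

    def div(a, b):
        return a / b

    if __name__ == "__main__":
        with Client(processes=False) as client:
            fut = client.submit(div, 1, 0)
            exc = fut.exception()       # returns the exception, does not raise
            print(type(exc).__name__)   # ZeroDivisionError
            print(fut.status)           # error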
-distributed/tests/test_client.py::test_Future_exception_sync 2022-08-26 14:01:30,457 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:30,459 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:30,462 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:30,462 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40941
-2022-08-26 14:01:30,462 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:30,469 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34231
-2022-08-26 14:01:30,469 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34231
-2022-08-26 14:01:30,469 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45463
-2022-08-26 14:01:30,469 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40941
-2022-08-26 14:01:30,469 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:30,469 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:30,469 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:30,469 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lf_4xh4u
-2022-08-26 14:01:30,469 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:30,469 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38279
-2022-08-26 14:01:30,469 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38279
-2022-08-26 14:01:30,469 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34053
-2022-08-26 14:01:30,469 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40941
-2022-08-26 14:01:30,469 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:30,469 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:30,469 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:30,470 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nwfppd8s
-2022-08-26 14:01:30,470 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:30,685 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38279', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:30,871 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38279
-2022-08-26 14:01:30,871 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:30,871 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40941
-2022-08-26 14:01:30,871 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:30,871 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34231', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:30,872 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34231
-2022-08-26 14:01:30,872 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:30,872 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:30,872 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40941
-2022-08-26 14:01:30,872 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:30,873 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:30,878 - distributed.scheduler - INFO - Receive client connection: Client-426cd9ce-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:30,878 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:30,965 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED2022-08-26 14:01:30,978 - distributed.scheduler - INFO - Remove client Client-426cd9ce-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:30,978 - distributed.scheduler - INFO - Remove client Client-426cd9ce-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_Future_release 2022-08-26 14:01:31,042 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/tests/test_client.py::test_Future_release_sync 2022-08-26 14:01:32,472 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:32,474 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:32,477 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:32,477 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40255
-2022-08-26 14:01:32,477 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:32,484 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37769
-2022-08-26 14:01:32,485 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37769
-2022-08-26 14:01:32,485 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42989
-2022-08-26 14:01:32,485 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40255
-2022-08-26 14:01:32,485 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:32,485 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:32,485 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:32,485 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-748ul3lh
-2022-08-26 14:01:32,485 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:32,485 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45893
-2022-08-26 14:01:32,485 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45893
-2022-08-26 14:01:32,485 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33355
-2022-08-26 14:01:32,485 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40255
-2022-08-26 14:01:32,485 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:32,485 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:32,485 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:32,485 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xcm4orx4
-2022-08-26 14:01:32,485 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:32,701 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37769', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:32,889 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37769
-2022-08-26 14:01:32,889 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:32,889 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40255
-2022-08-26 14:01:32,889 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:32,890 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45893', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:32,890 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45893
-2022-08-26 14:01:32,890 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:32,890 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:32,890 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40255
-2022-08-26 14:01:32,891 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:32,892 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:32,896 - distributed.scheduler - INFO - Receive client connection: Client-43a0c54c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:32,896 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:33,091 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED2022-08-26 14:01:33,144 - distributed.scheduler - INFO - Remove client Client-43a0c54c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:33,144 - distributed.scheduler - INFO - Remove client Client-43a0c54c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:33,144 - distributed.scheduler - INFO - Close client connection: Client-43a0c54c-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_short_tracebacks 2022-08-26 14:01:33,894 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:33,897 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:33,899 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:33,899 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33665
-2022-08-26 14:01:33,899 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:33,902 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-xcm4orx4', purging
-2022-08-26 14:01:33,902 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-748ul3lh', purging
-2022-08-26 14:01:33,907 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34327
-2022-08-26 14:01:33,907 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34327
-2022-08-26 14:01:33,907 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34681
-2022-08-26 14:01:33,907 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33665
-2022-08-26 14:01:33,907 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:33,907 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:33,907 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39469
-2022-08-26 14:01:33,907 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:33,907 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m68r9jcy
-2022-08-26 14:01:33,907 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39469
-2022-08-26 14:01:33,907 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38327
-2022-08-26 14:01:33,907 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:33,907 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33665
-2022-08-26 14:01:33,907 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:33,907 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:33,907 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:33,907 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-04lenbc4
-2022-08-26 14:01:33,907 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:34,129 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34327', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:34,317 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34327
-2022-08-26 14:01:34,317 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:34,317 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33665
-2022-08-26 14:01:34,318 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:34,318 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39469', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:34,318 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39469
-2022-08-26 14:01:34,318 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:34,318 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:34,319 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33665
-2022-08-26 14:01:34,319 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:34,319 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:34,324 - distributed.scheduler - INFO - Receive client connection: Client-447ab65e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:34,325 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:34,412 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED2022-08-26 14:01:34,414 - distributed.scheduler - INFO - Remove client Client-447ab65e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:34,414 - distributed.scheduler - INFO - Remove client Client-447ab65e-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_map_naming PASSED
-distributed/tests/test_client.py::test_submit_naming PASSED
-distributed/tests/test_client.py::test_exceptions 2022-08-26 14:01:34,943 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/tests/test_client.py::test_gc PASSED
-distributed/tests/test_client.py::test_thread 2022-08-26 14:01:36,179 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:36,181 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:36,184 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:36,184 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35027
-2022-08-26 14:01:36,184 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:36,192 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40933
-2022-08-26 14:01:36,192 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40933
-2022-08-26 14:01:36,192 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43911
-2022-08-26 14:01:36,192 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35027
-2022-08-26 14:01:36,192 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:36,192 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:36,192 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:36,192 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7uw5_t41
-2022-08-26 14:01:36,192 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:36,192 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35293
-2022-08-26 14:01:36,192 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35293
-2022-08-26 14:01:36,192 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44931
-2022-08-26 14:01:36,192 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35027
-2022-08-26 14:01:36,192 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:36,193 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:36,193 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:36,193 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3rje920e
-2022-08-26 14:01:36,193 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:36,404 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35293', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:36,591 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35293
-2022-08-26 14:01:36,591 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:36,591 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35027
-2022-08-26 14:01:36,591 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:36,592 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40933', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:36,592 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40933
-2022-08-26 14:01:36,592 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:36,592 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:36,593 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35027
-2022-08-26 14:01:36,593 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:36,594 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:36,598 - distributed.scheduler - INFO - Receive client connection: Client-45d5a284-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:36,598 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:01:36,927 - distributed.scheduler - INFO - Remove client Client-45d5a284-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:36,927 - distributed.scheduler - INFO - Remove client Client-45d5a284-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:36,927 - distributed.scheduler - INFO - Close client connection: Client-45d5a284-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_sync_exceptions 2022-08-26 14:01:37,674 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:37,676 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:37,679 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:37,679 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34243
-2022-08-26 14:01:37,679 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:37,681 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-7uw5_t41', purging
-2022-08-26 14:01:37,682 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-3rje920e', purging
-2022-08-26 14:01:37,686 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44977
-2022-08-26 14:01:37,686 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44977
-2022-08-26 14:01:37,686 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41787
-2022-08-26 14:01:37,686 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34243
-2022-08-26 14:01:37,686 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:37,686 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:37,686 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:37,686 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-43y_nmb9
-2022-08-26 14:01:37,686 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:37,687 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33069
-2022-08-26 14:01:37,687 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33069
-2022-08-26 14:01:37,687 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34501
-2022-08-26 14:01:37,687 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34243
-2022-08-26 14:01:37,687 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:37,687 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:37,687 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:37,687 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z3_uwaji
-2022-08-26 14:01:37,687 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:37,896 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44977', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:38,085 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44977
-2022-08-26 14:01:38,086 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:38,086 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34243
-2022-08-26 14:01:38,086 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:38,086 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33069', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:38,087 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:38,087 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33069
-2022-08-26 14:01:38,087 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:38,087 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34243
-2022-08-26 14:01:38,087 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:38,088 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:38,093 - distributed.scheduler - INFO - Receive client connection: Client-46b9b6ad-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:38,093 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:38,189 - distributed.worker - WARNING - Compute Failed
-Key:       div-ae588753f74edc77d4240b910b0f7ce5
-Function:  div
-args:      (10, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED2022-08-26 14:01:38,203 - distributed.scheduler - INFO - Remove client Client-46b9b6ad-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:38,203 - distributed.scheduler - INFO - Remove client Client-46b9b6ad-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_gather PASSED
-distributed/tests/test_client.py::test_gather_mismatched_client 2022-08-26 14:01:38,603 - distributed.client - ERROR - 
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x564040b170b0>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/tests/test_client.py::test_gather_lost PASSED
-distributed/tests/test_client.py::test_gather_sync 2022-08-26 14:01:39,759 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:39,762 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:39,764 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:39,764 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:32851
-2022-08-26 14:01:39,764 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:39,772 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38497
-2022-08-26 14:01:39,772 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38497
-2022-08-26 14:01:39,772 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38307
-2022-08-26 14:01:39,772 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32851
-2022-08-26 14:01:39,772 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:39,772 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:39,772 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:39,772 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9xjb0694
-2022-08-26 14:01:39,772 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:39,772 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33523
-2022-08-26 14:01:39,773 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33523
-2022-08-26 14:01:39,773 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45331
-2022-08-26 14:01:39,773 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32851
-2022-08-26 14:01:39,773 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:39,773 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:39,773 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:39,773 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mhf3mcj3
-2022-08-26 14:01:39,773 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:39,986 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38497', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:40,178 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38497
-2022-08-26 14:01:40,178 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:40,178 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32851
-2022-08-26 14:01:40,179 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:40,179 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33523', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:40,180 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:40,180 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33523
-2022-08-26 14:01:40,180 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:40,180 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32851
-2022-08-26 14:01:40,180 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:40,181 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:40,185 - distributed.scheduler - INFO - Receive client connection: Client-47f90e66-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:40,186 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:40,284 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED2022-08-26 14:01:40,288 - distributed.scheduler - INFO - Remove client Client-47f90e66-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:40,288 - distributed.scheduler - INFO - Remove client Client-47f90e66-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_gather_strict 2022-08-26 14:01:40,341 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/tests/test_client.py::test_gather_skip 2022-08-26 14:01:40,578 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
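test_gather_strict and test_gather_skip exercise the errors= argument of Client.gather: with the default errors="raise" the ZeroDivisionError above is re-raised on the client, while errors="skip" drops the failed future and returns only the successful results. A hedged sketch, again assuming an in-process client:

    from distributed import Client

    def div(a, b):
        return a / b

    if __name__ == "__main__":
        with Client(processes=False) as client:
            good = client.submit(div, 4, 2)
            bad = client.submit(div, 1, 0)
            # Expected to print [2.0]; the failing future is skipped.
            print(client.gather([good, bad], errors="skip"))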
-distributed/tests/test_client.py::test_limit_concurrent_gathering PASSED
-distributed/tests/test_client.py::test_get PASSED
-distributed/tests/test_client.py::test_get_sync 2022-08-26 14:01:42,608 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:42,610 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:42,613 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:42,613 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37403
-2022-08-26 14:01:42,613 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:42,620 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37119
-2022-08-26 14:01:42,620 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37119
-2022-08-26 14:01:42,620 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34091
-2022-08-26 14:01:42,620 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36605
-2022-08-26 14:01:42,620 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34091
-2022-08-26 14:01:42,620 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37403
-2022-08-26 14:01:42,620 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43371
-2022-08-26 14:01:42,620 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:42,620 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37403
-2022-08-26 14:01:42,620 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:42,620 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:42,620 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:42,620 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:42,620 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:42,620 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q4esgjgq
-2022-08-26 14:01:42,620 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yq05wga3
-2022-08-26 14:01:42,620 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:42,620 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:42,817 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37119', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:43,005 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37119
-2022-08-26 14:01:43,006 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:43,006 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37403
-2022-08-26 14:01:43,006 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:43,006 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34091', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:43,007 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34091
-2022-08-26 14:01:43,007 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:43,007 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:43,007 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37403
-2022-08-26 14:01:43,007 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:43,008 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:43,012 - distributed.scheduler - INFO - Receive client connection: Client-49a86aa4-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:43,013 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:01:43,036 - distributed.scheduler - INFO - Remove client Client-49a86aa4-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:43,036 - distributed.scheduler - INFO - Remove client Client-49a86aa4-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:43,036 - distributed.scheduler - INFO - Close client connection: Client-49a86aa4-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_no_future_references 2022-08-26 14:01:43,788 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:43,790 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:43,793 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:43,793 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36173
-2022-08-26 14:01:43,793 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:43,795 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-q4esgjgq', purging
-2022-08-26 14:01:43,795 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-yq05wga3', purging
-2022-08-26 14:01:43,800 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35063
-2022-08-26 14:01:43,800 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35063
-2022-08-26 14:01:43,800 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46219
-2022-08-26 14:01:43,800 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36173
-2022-08-26 14:01:43,800 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:43,800 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:43,800 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:43,800 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-maiqgyr6
-2022-08-26 14:01:43,800 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:43,800 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43261
-2022-08-26 14:01:43,800 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43261
-2022-08-26 14:01:43,800 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42703
-2022-08-26 14:01:43,800 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36173
-2022-08-26 14:01:43,801 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:43,801 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:43,801 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:43,801 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-83fzv47d
-2022-08-26 14:01:43,801 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:43,995 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43261', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:44,182 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43261
-2022-08-26 14:01:44,182 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:44,182 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36173
-2022-08-26 14:01:44,183 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:44,183 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35063', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:44,183 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35063
-2022-08-26 14:01:44,183 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:44,183 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:44,183 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36173
-2022-08-26 14:01:44,184 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:44,185 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:44,189 - distributed.scheduler - INFO - Receive client connection: Client-4a5bedc4-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:44,190 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:01:44,203 - distributed.scheduler - INFO - Remove client Client-4a5bedc4-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:44,203 - distributed.scheduler - INFO - Remove client Client-4a5bedc4-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_get_sync_optimize_graph_passes_through 2022-08-26 14:01:44,956 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:44,958 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:44,960 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:44,961 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43041
-2022-08-26 14:01:44,961 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:44,963 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-83fzv47d', purging
-2022-08-26 14:01:44,963 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-maiqgyr6', purging
-2022-08-26 14:01:44,968 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37251
-2022-08-26 14:01:44,968 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37251
-2022-08-26 14:01:44,968 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34281
-2022-08-26 14:01:44,968 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43041
-2022-08-26 14:01:44,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:44,968 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:44,968 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:44,968 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34583
-2022-08-26 14:01:44,968 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nbzgy9h3
-2022-08-26 14:01:44,968 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34583
-2022-08-26 14:01:44,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:44,968 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45333
-2022-08-26 14:01:44,968 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43041
-2022-08-26 14:01:44,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:44,968 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:44,968 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:44,968 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cp6_lp71
-2022-08-26 14:01:44,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:45,158 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34583', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:45,347 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34583
-2022-08-26 14:01:45,347 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:45,347 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43041
-2022-08-26 14:01:45,348 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:45,348 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37251', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:45,348 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:45,348 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37251
-2022-08-26 14:01:45,348 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:45,349 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43041
-2022-08-26 14:01:45,349 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:45,350 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:45,354 - distributed.scheduler - INFO - Receive client connection: Client-4b0db7ec-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:45,355 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:01:45,414 - distributed.scheduler - INFO - Remove client Client-4b0db7ec-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:45,414 - distributed.scheduler - INFO - Remove client Client-4b0db7ec-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:45,414 - distributed.scheduler - INFO - Close client connection: Client-4b0db7ec-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_gather_errors 2022-08-26 14:01:45,467 - distributed.worker - WARNING - Compute Failed
-Key:       f-47977ce983fb002d65a408b8e9e055ad
-Function:  f
-args:      (1, 2)
-kwargs:    {}
-Exception: 'TypeError()'
-
-2022-08-26 14:01:45,467 - distributed.worker - WARNING - Compute Failed
-Key:       g-3e0a2f5210309df8d0a55876cf131618
-Function:  g
-args:      (1, 2)
-kwargs:    {}
-Exception: 'AttributeError()'
-
-PASSED
-distributed/tests/test_client.py::test_wait PASSED
-distributed/tests/test_client.py::test_wait_first_completed PASSED
-distributed/tests/test_client.py::test_wait_timeout PASSED
-distributed/tests/test_client.py::test_wait_sync 2022-08-26 14:01:47,433 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:47,435 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:47,438 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:47,438 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38561
-2022-08-26 14:01:47,438 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:47,445 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37721
-2022-08-26 14:01:47,445 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37721
-2022-08-26 14:01:47,445 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40471
-2022-08-26 14:01:47,445 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38561
-2022-08-26 14:01:47,445 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:47,445 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:47,445 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:47,445 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4oy5xgo5
-2022-08-26 14:01:47,445 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:47,447 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37271
-2022-08-26 14:01:47,447 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37271
-2022-08-26 14:01:47,447 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43321
-2022-08-26 14:01:47,447 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38561
-2022-08-26 14:01:47,447 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:47,447 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:47,447 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:47,447 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oypf_rf8
-2022-08-26 14:01:47,447 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:47,664 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37721', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:47,853 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37721
-2022-08-26 14:01:47,853 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:47,853 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38561
-2022-08-26 14:01:47,854 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:47,854 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37271', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:47,855 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37271
-2022-08-26 14:01:47,855 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:47,855 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:47,855 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38561
-2022-08-26 14:01:47,855 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:47,856 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:47,861 - distributed.scheduler - INFO - Receive client connection: Client-4c8c29de-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:47,861 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:01:47,893 - distributed.scheduler - INFO - Remove client Client-4c8c29de-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:47,894 - distributed.scheduler - INFO - Remove client Client-4c8c29de-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_wait_informative_error_for_timeouts 2022-08-26 14:01:48,646 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:48,648 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:48,650 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:48,651 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41359
-2022-08-26 14:01:48,651 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:48,653 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-4oy5xgo5', purging
-2022-08-26 14:01:48,653 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-oypf_rf8', purging
-2022-08-26 14:01:48,658 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41963
-2022-08-26 14:01:48,658 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41963
-2022-08-26 14:01:48,658 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33279
-2022-08-26 14:01:48,658 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41359
-2022-08-26 14:01:48,658 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:48,658 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:48,658 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:48,658 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a8erizy5
-2022-08-26 14:01:48,658 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:48,661 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44059
-2022-08-26 14:01:48,661 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44059
-2022-08-26 14:01:48,661 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36235
-2022-08-26 14:01:48,661 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41359
-2022-08-26 14:01:48,661 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:48,661 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:48,661 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:48,661 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s83r6hzd
-2022-08-26 14:01:48,662 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:48,855 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41963', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:49,046 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41963
-2022-08-26 14:01:49,046 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:49,046 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41359
-2022-08-26 14:01:49,047 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:49,047 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44059', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:49,048 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:49,048 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44059
-2022-08-26 14:01:49,048 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:49,048 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41359
-2022-08-26 14:01:49,048 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:49,049 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:49,054 - distributed.scheduler - INFO - Receive client connection: Client-4d423954-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:49,054 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:01:49,066 - distributed.scheduler - INFO - Remove client Client-4d423954-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:49,067 - distributed.scheduler - INFO - Remove client Client-4d423954-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_garbage_collection PASSED
-distributed/tests/test_client.py::test_garbage_collection_with_scatter PASSED
-distributed/tests/test_client.py::test_recompute_released_key PASSED
-distributed/tests/test_client.py::test_long_tasks_dont_trigger_timeout SKIPPED
-distributed/tests/test_client.py::test_missing_data_heals SKIPPED (u...)
-distributed/tests/test_client.py::test_gather_robust_to_missing_data SKIPPED
-distributed/tests/test_client.py::test_gather_robust_to_nested_missing_data SKIPPED
-distributed/tests/test_client.py::test_tokenize_on_futures PASSED
-distributed/tests/test_client.py::test_restrictions_submit PASSED
-distributed/tests/test_client.py::test_restrictions_ip_port PASSED
-distributed/tests/test_client.py::test_restrictions_map PASSED
-distributed/tests/test_client.py::test_restrictions_get PASSED
-distributed/tests/test_client.py::test_restrictions_get_annotate PASSED
-distributed/tests/test_client.py::test_remove_worker PASSED
-distributed/tests/test_client.py::test_errors_dont_block 2022-08-26 14:01:51,774 - distributed.worker - WARNING - Compute Failed
-Key:       throws-3b8bc6f73d0cc97be9e4f088a21e64df
-Function:  throws
-args:      (2)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-2022-08-26 14:01:51,774 - distributed.worker - WARNING - Compute Failed
-Key:       throws-e7547614a2ac592d36b4a0b751337778
-Function:  throws
-args:      (1)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED
-distributed/tests/test_client.py::test_submit_quotes PASSED
-distributed/tests/test_client.py::test_map_quotes PASSED
-distributed/tests/test_client.py::test_two_consecutive_clients_share_results PASSED
-distributed/tests/test_client.py::test_submit_then_get_with_Future PASSED
-distributed/tests/test_client.py::test_aliases PASSED
-distributed/tests/test_client.py::test_aliases_2 PASSED
-distributed/tests/test_client.py::test_scatter PASSED
-distributed/tests/test_client.py::test_scatter_types PASSED
-distributed/tests/test_client.py::test_scatter_non_list PASSED
-distributed/tests/test_client.py::test_scatter_tokenize_local PASSED
-distributed/tests/test_client.py::test_scatter_singletons PASSED
-distributed/tests/test_client.py::test_scatter_typename PASSED
-distributed/tests/test_client.py::test_scatter_hash PASSED
-distributed/tests/test_client.py::test_scatter_hash_2 PASSED
-distributed/tests/test_client.py::test_get_releases_data PASSED
-distributed/tests/test_client.py::test_current 2022-08-26 14:01:56,483 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:56,485 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:56,489 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:56,489 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34027
-2022-08-26 14:01:56,489 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:56,496 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46849
-2022-08-26 14:01:56,496 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46849
-2022-08-26 14:01:56,496 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43423
-2022-08-26 14:01:56,496 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34027
-2022-08-26 14:01:56,497 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:56,497 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:56,497 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:56,497 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ui_og7jy
-2022-08-26 14:01:56,497 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:56,497 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43487
-2022-08-26 14:01:56,497 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43487
-2022-08-26 14:01:56,497 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38865
-2022-08-26 14:01:56,497 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34027
-2022-08-26 14:01:56,497 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:56,497 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:56,497 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:56,497 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2pjga6h8
-2022-08-26 14:01:56,497 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:56,693 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46849', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:56,891 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46849
-2022-08-26 14:01:56,892 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:56,891 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34027
-2022-08-26 14:01:56,892 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:56,892 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43487', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:56,893 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:56,893 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43487
-2022-08-26 14:01:56,893 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:56,893 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34027
-2022-08-26 14:01:56,893 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:56,894 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:56,899 - distributed.scheduler - INFO - Receive client connection: Client-51ef43f6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:56,899 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:56,910 - distributed.scheduler - INFO - Remove client Client-51ef43f6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:56,910 - distributed.scheduler - INFO - Remove client Client-51ef43f6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:56,910 - distributed.scheduler - INFO - Close client connection: Client-51ef43f6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:56,913 - distributed.scheduler - INFO - Receive client connection: Client-51f184f2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:56,913 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:56,925 - distributed.scheduler - INFO - Remove client Client-51f184f2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:56,925 - distributed.scheduler - INFO - Remove client Client-51f184f2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:56,925 - distributed.scheduler - INFO - Close client connection: Client-51f184f2-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_client.py::test_global_clients 2022-08-26 14:01:57,679 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:01:57,681 - distributed.scheduler - INFO - State start
-2022-08-26 14:01:57,685 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:01:57,685 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41615
-2022-08-26 14:01:57,685 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:01:57,687 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ui_og7jy', purging
-2022-08-26 14:01:57,688 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-2pjga6h8', purging
-2022-08-26 14:01:57,693 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37299
-2022-08-26 14:01:57,693 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37299
-2022-08-26 14:01:57,693 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46349
-2022-08-26 14:01:57,693 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41615
-2022-08-26 14:01:57,693 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:57,693 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:57,693 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:57,693 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_nufa0f9
-2022-08-26 14:01:57,693 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:57,694 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45883
-2022-08-26 14:01:57,694 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45883
-2022-08-26 14:01:57,694 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35889
-2022-08-26 14:01:57,694 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41615
-2022-08-26 14:01:57,694 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:57,694 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:01:57,694 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:01:57,694 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ktfed0br
-2022-08-26 14:01:57,694 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:57,890 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45883', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:58,080 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45883
-2022-08-26 14:01:58,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:58,080 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41615
-2022-08-26 14:01:58,080 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:58,081 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37299', status: init, memory: 0, processing: 0>
-2022-08-26 14:01:58,081 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37299
-2022-08-26 14:01:58,081 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:58,081 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:58,081 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41615
-2022-08-26 14:01:58,082 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:01:58,082 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:58,086 - distributed.scheduler - INFO - Receive client connection: Client-52a486f5-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:58,087 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:58,090 - distributed.scheduler - INFO - Receive client connection: Client-52a50f8c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:58,090 - distributed.core - INFO - Starting established connection
-2022-08-26 14:01:58,101 - distributed.scheduler - INFO - Remove client Client-52a50f8c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:58,101 - distributed.scheduler - INFO - Remove client Client-52a50f8c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:58,102 - distributed.scheduler - INFO - Close client connection: Client-52a50f8c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:58,102 - distributed.scheduler - INFO - Remove client Client-52a486f5-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:58,102 - distributed.scheduler - INFO - Remove client Client-52a486f5-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:01:58,103 - distributed.scheduler - INFO - Close client connection: Client-52a486f5-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_client.py::test_exception_on_exception 2022-08-26 14:01:58,155 - distributed.worker - WARNING - Compute Failed
-Key:       lambda-5dd3fb32d4b43bec4c6d52b66e62b811
-Function:  lambda
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/tests/test_client.py::test_get_task_prefix_states PASSED
-distributed/tests/test_client.py::test_get_nbytes PASSED
-distributed/tests/test_client.py::test_nbytes_determines_worker PASSED
-distributed/tests/test_client.py::test_if_intermediates_clear_on_error 2022-08-26 14:01:59,164 - distributed.worker - WARNING - Compute Failed
-Key:       div-df7eb407c84c692102ae6fb333dc883f
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/tests/test_client.py::test_pragmatic_move_small_data_to_large_data PASSED
-distributed/tests/test_client.py::test_get_with_non_list_key PASSED
-distributed/tests/test_client.py::test_get_with_error 2022-08-26 14:01:59,949 - distributed.worker - WARNING - Compute Failed
-Key:       x
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/tests/test_client.py::test_get_with_error_sync 2022-08-26 14:02:00,907 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:00,909 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:00,912 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:00,912 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35471
-2022-08-26 14:02:00,912 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:00,919 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44249
-2022-08-26 14:02:00,919 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44249
-2022-08-26 14:02:00,919 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45303
-2022-08-26 14:02:00,919 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35471
-2022-08-26 14:02:00,919 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:00,920 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:00,920 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:00,920 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-noaoq0_r
-2022-08-26 14:02:00,920 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:00,920 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42241
-2022-08-26 14:02:00,920 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42241
-2022-08-26 14:02:00,920 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35555
-2022-08-26 14:02:00,920 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35471
-2022-08-26 14:02:00,920 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:00,920 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:00,920 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:00,920 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1z4wwjsu
-2022-08-26 14:02:00,920 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:01,141 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42241', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:01,332 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42241
-2022-08-26 14:02:01,332 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:01,332 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35471
-2022-08-26 14:02:01,333 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:01,333 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44249', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:01,333 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44249
-2022-08-26 14:02:01,333 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:01,333 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:01,334 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35471
-2022-08-26 14:02:01,334 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:01,334 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:01,339 - distributed.scheduler - INFO - Receive client connection: Client-5494d04e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:01,339 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:01,427 - distributed.worker - WARNING - Compute Failed
-Key:       x
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED2022-08-26 14:02:01,439 - distributed.scheduler - INFO - Remove client Client-5494d04e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:01,439 - distributed.scheduler - INFO - Remove client Client-5494d04e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:01,440 - distributed.scheduler - INFO - Close client connection: Client-5494d04e-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_directed_scatter PASSED
-distributed/tests/test_client.py::test_scatter_direct PASSED
-distributed/tests/test_client.py::test_scatter_direct_2 PASSED
-distributed/tests/test_client.py::test_scatter_direct_numpy PASSED
-distributed/tests/test_client.py::test_scatter_direct_broadcast PASSED
-distributed/tests/test_client.py::test_scatter_direct_balanced PASSED
-distributed/tests/test_client.py::test_scatter_direct_broadcast_target PASSED
-distributed/tests/test_client.py::test_scatter_direct_empty PASSED
-distributed/tests/test_client.py::test_scatter_direct_spread_evenly PASSED
-distributed/tests/test_client.py::test_scatter_gather_sync[True-True] 2022-08-26 14:02:04,535 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:04,537 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:04,541 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:04,541 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33455
-2022-08-26 14:02:04,541 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:04,548 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38627
-2022-08-26 14:02:04,548 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38627
-2022-08-26 14:02:04,548 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34911
-2022-08-26 14:02:04,548 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33455
-2022-08-26 14:02:04,548 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:04,548 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:04,548 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:04,548 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7wo39cp8
-2022-08-26 14:02:04,548 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:04,549 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41109
-2022-08-26 14:02:04,549 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41109
-2022-08-26 14:02:04,549 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34475
-2022-08-26 14:02:04,549 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33455
-2022-08-26 14:02:04,549 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:04,549 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:04,549 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:04,549 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-06n3ud3j
-2022-08-26 14:02:04,549 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:04,747 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41109', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:04,945 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41109
-2022-08-26 14:02:04,946 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:04,946 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33455
-2022-08-26 14:02:04,946 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:04,947 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38627', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:04,947 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:04,947 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38627
-2022-08-26 14:02:04,947 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:04,947 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33455
-2022-08-26 14:02:04,948 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:04,948 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:04,953 - distributed.scheduler - INFO - Receive client connection: Client-56bc461c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:04,953 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:02:04,976 - distributed.scheduler - INFO - Remove client Client-56bc461c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:04,976 - distributed.scheduler - INFO - Remove client Client-56bc461c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:04,977 - distributed.scheduler - INFO - Close client connection: Client-56bc461c-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_scatter_gather_sync[True-False] 2022-08-26 14:02:05,742 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:05,744 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:05,747 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:05,747 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40837
-2022-08-26 14:02:05,747 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:05,749 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-06n3ud3j', purging
-2022-08-26 14:02:05,750 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-7wo39cp8', purging
-2022-08-26 14:02:05,754 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45587
-2022-08-26 14:02:05,754 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45587
-2022-08-26 14:02:05,754 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40263
-2022-08-26 14:02:05,754 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40837
-2022-08-26 14:02:05,754 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:05,755 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:05,755 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:05,755 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qik86ug8
-2022-08-26 14:02:05,755 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:05,763 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33623
-2022-08-26 14:02:05,763 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33623
-2022-08-26 14:02:05,763 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33165
-2022-08-26 14:02:05,763 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40837
-2022-08-26 14:02:05,763 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:05,763 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:05,763 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:05,763 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-o6l5h8w2
-2022-08-26 14:02:05,764 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:05,960 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33623', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:06,153 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33623
-2022-08-26 14:02:06,153 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:06,153 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40837
-2022-08-26 14:02:06,154 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:06,154 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45587', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:06,154 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:06,155 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45587
-2022-08-26 14:02:06,155 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:06,155 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40837
-2022-08-26 14:02:06,155 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:06,156 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:06,160 - distributed.scheduler - INFO - Receive client connection: Client-57747869-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:06,161 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:02:06,184 - distributed.scheduler - INFO - Remove client Client-57747869-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:06,185 - distributed.scheduler - INFO - Remove client Client-57747869-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_scatter_gather_sync[False-True] 2022-08-26 14:02:06,948 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:06,950 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:06,953 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:06,953 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40435
-2022-08-26 14:02:06,953 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:06,955 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-qik86ug8', purging
-2022-08-26 14:02:06,955 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-o6l5h8w2', purging
-2022-08-26 14:02:06,960 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41823
-2022-08-26 14:02:06,960 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41823
-2022-08-26 14:02:06,960 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38687
-2022-08-26 14:02:06,960 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43087
-2022-08-26 14:02:06,960 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40435
-2022-08-26 14:02:06,961 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43087
-2022-08-26 14:02:06,961 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44071
-2022-08-26 14:02:06,961 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:06,961 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40435
-2022-08-26 14:02:06,961 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:06,961 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:06,961 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:06,961 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:06,961 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sl_klavn
-2022-08-26 14:02:06,961 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:06,961 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-97vvl1_1
-2022-08-26 14:02:06,961 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:06,961 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:07,158 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43087', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:07,355 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43087
-2022-08-26 14:02:07,355 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:07,355 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40435
-2022-08-26 14:02:07,355 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:07,356 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41823', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:07,356 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41823
-2022-08-26 14:02:07,356 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:07,356 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:07,356 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40435
-2022-08-26 14:02:07,356 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:07,357 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:07,363 - distributed.scheduler - INFO - Receive client connection: Client-582bd8bf-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:07,363 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:02:07,385 - distributed.scheduler - INFO - Remove client Client-582bd8bf-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:07,385 - distributed.scheduler - INFO - Remove client Client-582bd8bf-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_scatter_gather_sync[False-False] 2022-08-26 14:02:08,149 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:08,151 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:08,154 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:08,154 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38225
-2022-08-26 14:02:08,154 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:08,156 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-sl_klavn', purging
-2022-08-26 14:02:08,156 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-97vvl1_1', purging
-2022-08-26 14:02:08,160 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33807
-2022-08-26 14:02:08,161 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33807
-2022-08-26 14:02:08,161 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42231
-2022-08-26 14:02:08,161 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38225
-2022-08-26 14:02:08,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:08,161 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:08,161 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:08,161 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-63y_5zdo
-2022-08-26 14:02:08,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:08,161 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42765
-2022-08-26 14:02:08,161 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42765
-2022-08-26 14:02:08,161 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37861
-2022-08-26 14:02:08,161 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38225
-2022-08-26 14:02:08,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:08,161 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:08,161 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:08,161 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5a7iagx3
-2022-08-26 14:02:08,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:08,354 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33807', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:08,544 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33807
-2022-08-26 14:02:08,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:08,544 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38225
-2022-08-26 14:02:08,544 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:08,545 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42765', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:08,545 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:08,545 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42765
-2022-08-26 14:02:08,545 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:08,545 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38225
-2022-08-26 14:02:08,545 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:08,546 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:08,551 - distributed.scheduler - INFO - Receive client connection: Client-58e13451-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:08,551 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:02:08,573 - distributed.scheduler - INFO - Remove client Client-58e13451-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:08,573 - distributed.scheduler - INFO - Remove client Client-58e13451-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_gather_direct PASSED
-distributed/tests/test_client.py::test_many_submits_spread_evenly PASSED
-distributed/tests/test_client.py::test_traceback 2022-08-26 14:02:09,112 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/tests/test_client.py::test_get_traceback 2022-08-26 14:02:09,365 - distributed.worker - WARNING - Compute Failed
-Key:       x
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/tests/test_client.py::test_gather_traceback 2022-08-26 14:02:09,607 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/tests/test_client.py::test_traceback_sync 2022-08-26 14:02:10,572 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:10,574 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:10,577 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:10,577 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44479
-2022-08-26 14:02:10,577 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:10,584 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34655
-2022-08-26 14:02:10,584 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34655
-2022-08-26 14:02:10,584 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44191
-2022-08-26 14:02:10,584 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44479
-2022-08-26 14:02:10,584 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:10,584 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:10,584 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:10,584 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2venuh1g
-2022-08-26 14:02:10,584 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:10,584 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43207
-2022-08-26 14:02:10,584 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43207
-2022-08-26 14:02:10,584 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34897
-2022-08-26 14:02:10,584 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44479
-2022-08-26 14:02:10,584 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:10,584 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:10,584 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:10,584 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-g3olrilm
-2022-08-26 14:02:10,584 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:10,780 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34655', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:10,976 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34655
-2022-08-26 14:02:10,976 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:10,976 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44479
-2022-08-26 14:02:10,977 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:10,977 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43207', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:10,977 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43207
-2022-08-26 14:02:10,977 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:10,977 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:10,978 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44479
-2022-08-26 14:02:10,978 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:10,978 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:10,983 - distributed.scheduler - INFO - Receive client connection: Client-5a5462e9-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:10,984 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:11,072 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED2022-08-26 14:02:11,096 - distributed.scheduler - INFO - Remove client Client-5a5462e9-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:11,096 - distributed.scheduler - INFO - Remove client Client-5a5462e9-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_upload_file PASSED
-distributed/tests/test_client.py::test_upload_file_refresh_delayed PASSED
-distributed/tests/test_client.py::test_upload_file_no_extension 2022-08-26 14:02:11,669 - distributed.utils - WARNING - Found nothing to import from myfile
-2022-08-26 14:02:11,671 - distributed.utils - WARNING - Found nothing to import from myfile
-PASSED
-distributed/tests/test_client.py::test_upload_file_zip PASSED
-distributed/tests/test_client.py::test_upload_file_egg SKIPPED (need...)
-distributed/tests/test_client.py::test_upload_large_file PASSED
-distributed/tests/test_client.py::test_upload_file_sync 2022-08-26 14:02:13,146 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:13,149 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:13,151 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:13,151 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36653
-2022-08-26 14:02:13,151 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:13,160 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43927
-2022-08-26 14:02:13,160 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43927
-2022-08-26 14:02:13,160 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42007
-2022-08-26 14:02:13,161 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36653
-2022-08-26 14:02:13,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:13,161 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:13,161 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:13,161 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-u3wu3jot
-2022-08-26 14:02:13,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:13,202 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36763
-2022-08-26 14:02:13,203 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36763
-2022-08-26 14:02:13,203 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43905
-2022-08-26 14:02:13,203 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36653
-2022-08-26 14:02:13,203 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:13,203 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:13,203 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:13,203 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5jqiswkd
-2022-08-26 14:02:13,203 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:13,384 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43927', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:13,580 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43927
-2022-08-26 14:02:13,580 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:13,581 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36653
-2022-08-26 14:02:13,581 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:13,581 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36763', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:13,582 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36763
-2022-08-26 14:02:13,582 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:13,582 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:13,582 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36653
-2022-08-26 14:02:13,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:13,583 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:13,588 - distributed.scheduler - INFO - Receive client connection: Client-5be1d9e8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:13,589 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:13,592 - distributed.worker - INFO - Starting Worker plugin /tmp/myfile.pye1778965-972e-4350-b87c-ea15a904f445
-2022-08-26 14:02:13,592 - distributed.worker - INFO - Starting Worker plugin /tmp/myfile.pye1778965-972e-4350-b87c-ea15a904f445
-2022-08-26 14:02:13,595 - distributed.utils - INFO - Reload module myfile from .py file
-2022-08-26 14:02:13,596 - distributed.utils - INFO - Reload module myfile from .py file
-PASSED2022-08-26 14:02:13,611 - distributed.scheduler - INFO - Remove client Client-5be1d9e8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:13,611 - distributed.scheduler - INFO - Remove client Client-5be1d9e8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:13,611 - distributed.scheduler - INFO - Close client connection: Client-5be1d9e8-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_upload_file_exception 2022-08-26 14:02:13,656 - distributed.worker - ERROR - invalid syntax (myfile.py, line 1)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1240, in upload_file
-    import_file(out_filename)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 1122, in import_file
-    loaded.append(importlib.reload(importlib.import_module(name)))
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/importlib/__init__.py", line 169, in reload
-    _bootstrap._exec(spec, module)
-  File "<frozen importlib._bootstrap>", line 619, in _exec
-  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
-  File "<frozen importlib._bootstrap_external>", line 1017, in get_code
-  File "<frozen importlib._bootstrap_external>", line 947, in source_to_code
-  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
-  File "/tmp/dask-worker-space/worker-zovf7mky/myfile.py", line 1
-    syntax-error!
-                ^
-SyntaxError: invalid syntax
-2022-08-26 14:02:13,660 - distributed.worker - ERROR - invalid syntax (myfile.py, line 1)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1240, in upload_file
-    import_file(out_filename)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 1122, in import_file
-    loaded.append(importlib.reload(importlib.import_module(name)))
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/importlib/__init__.py", line 169, in reload
-    _bootstrap._exec(spec, module)
-  File "<frozen importlib._bootstrap>", line 619, in _exec
-  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
-  File "<frozen importlib._bootstrap_external>", line 1017, in get_code
-  File "<frozen importlib._bootstrap_external>", line 947, in source_to_code
-  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
-  File "/tmp/dask-worker-space/worker-rxsk9pv7/myfile.py", line 1
-    syntax-error!
-                ^
-SyntaxError: invalid syntax
-PASSED
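
The two SyntaxError tracebacks above are the expected outcome of test_upload_file_exception: the test ships a module whose only line is "syntax-error!" to every worker, and each worker fails while importing it. A minimal sketch of the same pattern outside the test suite, assuming a throwaway local client rather than the test fixtures used here (file name and cluster size are illustrative):

    from distributed import Client

    if __name__ == "__main__":
        # Small local cluster, roughly matching the two single-threaded
        # workers the fixture starts in the log above.
        client = Client(n_workers=2, threads_per_worker=1)

        # Write a module that cannot be imported.
        with open("myfile.py", "w") as f:
            f.write("syntax-error!\n")

        try:
            # upload_file() copies the file to every worker and, for .py
            # files, (re)imports the module there, so the import error
            # comes back to the caller.
            client.upload_file("myfile.py")
        except SyntaxError as exc:
            print("worker-side import failed:", exc)
        finally:
            client.close()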
-distributed/tests/test_client.py::test_upload_file_exception_sync 2022-08-26 14:02:14,654 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:14,657 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:14,661 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:14,661 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33311
-2022-08-26 14:02:14,661 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:14,663 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-4afkrw6t', purging
-2022-08-26 14:02:14,664 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-58ht_uvh', purging
-2022-08-26 14:02:14,670 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43169
-2022-08-26 14:02:14,670 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39825
-2022-08-26 14:02:14,670 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43169
-2022-08-26 14:02:14,671 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39825
-2022-08-26 14:02:14,671 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32999
-2022-08-26 14:02:14,671 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41957
-2022-08-26 14:02:14,671 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33311
-2022-08-26 14:02:14,671 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33311
-2022-08-26 14:02:14,671 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:14,671 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:14,671 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:14,671 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:14,671 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:14,671 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:14,671 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-co68e1do
-2022-08-26 14:02:14,671 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-efmjqzyr
-2022-08-26 14:02:14,671 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:14,671 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:14,873 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43169', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:15,075 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43169
-2022-08-26 14:02:15,075 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:15,075 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33311
-2022-08-26 14:02:15,075 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:15,076 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39825', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:15,076 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:15,076 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39825
-2022-08-26 14:02:15,076 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:15,076 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33311
-2022-08-26 14:02:15,077 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:15,077 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:15,083 - distributed.scheduler - INFO - Receive client connection: Client-5cc5e062-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:15,083 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:15,086 - distributed.worker - INFO - Starting Worker plugin /tmp/myfile.pycfcf7a45-0f0f-4561-97a2-a3bb860fbe7e
-2022-08-26 14:02:15,086 - distributed.worker - INFO - Starting Worker plugin /tmp/myfile.pycfcf7a45-0f0f-4561-97a2-a3bb860fbe7e
-2022-08-26 14:02:15,088 - distributed.utils - INFO - Reload module myfile from .py file
-2022-08-26 14:02:15,088 - distributed.worker - ERROR - invalid syntax (myfile.py, line 1)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1240, in upload_file
-    import_file(out_filename)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 1122, in import_file
-    loaded.append(importlib.reload(importlib.import_module(name)))
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/importlib/__init__.py", line 126, in import_module
-    return _bootstrap._gcd_import(name[level:], package, level)
-  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
-  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
-  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
-  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
-  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
-  File "<frozen importlib._bootstrap_external>", line 1017, in get_code
-  File "<frozen importlib._bootstrap_external>", line 947, in source_to_code
-  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
-  File "/tmp/dask-worker-space/worker-efmjqzyr/myfile.py", line 1
-    syntax-error!
-                ^
-SyntaxError: invalid syntax
-2022-08-26 14:02:15,089 - distributed.utils - INFO - Reload module myfile from .py file
-2022-08-26 14:02:15,090 - distributed.worker - ERROR - invalid syntax (myfile.py, line 1)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1240, in upload_file
-    import_file(out_filename)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 1122, in import_file
-    loaded.append(importlib.reload(importlib.import_module(name)))
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/importlib/__init__.py", line 126, in import_module
-    return _bootstrap._gcd_import(name[level:], package, level)
-  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
-  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
-  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
-  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
-  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
-  File "<frozen importlib._bootstrap_external>", line 1017, in get_code
-  File "<frozen importlib._bootstrap_external>", line 947, in source_to_code
-  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
-  File "/tmp/dask-worker-space/worker-co68e1do/myfile.py", line 1
-    syntax-error!
-                ^
-SyntaxError: invalid syntax
-PASSED2022-08-26 14:02:15,175 - distributed.scheduler - INFO - Remove client Client-5cc5e062-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:15,175 - distributed.scheduler - INFO - Remove client Client-5cc5e062-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:15,175 - distributed.scheduler - INFO - Close client connection: Client-5cc5e062-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_upload_file_new_worker PASSED
-distributed/tests/test_client.py::test_multiple_clients SKIPPED (unc...)
-distributed/tests/test_client.py::test_async_compute PASSED
-distributed/tests/test_client.py::test_async_compute_with_scatter PASSED
-distributed/tests/test_client.py::test_sync_compute 2022-08-26 14:02:16,730 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 45681 instead
-  warnings.warn(
-2022-08-26 14:02:16,732 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:16,735 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:16,735 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37241
-2022-08-26 14:02:16,735 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45681
-2022-08-26 14:02:16,744 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36033
-2022-08-26 14:02:16,744 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35357
-2022-08-26 14:02:16,744 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35357
-2022-08-26 14:02:16,744 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36033
-2022-08-26 14:02:16,744 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38449
-2022-08-26 14:02:16,744 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33479
-2022-08-26 14:02:16,744 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37241
-2022-08-26 14:02:16,744 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37241
-2022-08-26 14:02:16,744 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:16,744 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:16,744 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:16,744 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:16,744 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:16,744 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:16,744 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5bnii_co
-2022-08-26 14:02:16,744 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mlzic2no
-2022-08-26 14:02:16,744 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:16,744 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:16,943 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36033', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:17,153 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36033
-2022-08-26 14:02:17,153 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:17,153 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37241
-2022-08-26 14:02:17,153 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:17,154 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:17,154 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35357', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:17,154 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35357
-2022-08-26 14:02:17,154 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:17,155 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37241
-2022-08-26 14:02:17,155 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:17,156 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:17,161 - distributed.scheduler - INFO - Receive client connection: Client-5e02faa7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:17,161 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:02:17,199 - distributed.scheduler - INFO - Remove client Client-5e02faa7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:17,199 - distributed.scheduler - INFO - Remove client Client-5e02faa7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:17,199 - distributed.scheduler - INFO - Close client connection: Client-5e02faa7-2582-11ed-a99d-00d861bc4509
-
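The recurring UserWarning about port 8787 being in use is expected when clusters are started back to back in one session: the first scheduler keeps the default dashboard port and later ones fall back to a random free port (45681 here). Outside the test suite the warning can be avoided by requesting an ephemeral port up front; a small sketch assuming the stock LocalCluster keyword dashboard_address, where ":0" asks the OS for any free port:

    from distributed import Client, LocalCluster

    if __name__ == "__main__":
        # ":0" lets the OS pick a free dashboard port, so a second cluster
        # does not collide with one already bound to 8787.
        cluster = LocalCluster(n_workers=1, dashboard_address=":0")
        client = Client(cluster)
        print("dashboard:", client.dashboard_link)
        client.close()
        cluster.close()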
-distributed/tests/test_client.py::test_remote_scatter_gather PASSED
-distributed/tests/test_client.py::test_remote_submit_on_Future PASSED
-distributed/tests/test_client.py::test_start_is_idempotent 2022-08-26 14:02:18,513 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 41395 instead
-  warnings.warn(
-2022-08-26 14:02:18,516 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:18,519 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:18,519 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33215
-2022-08-26 14:02:18,519 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41395
-2022-08-26 14:02:18,528 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38515
-2022-08-26 14:02:18,528 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38515
-2022-08-26 14:02:18,528 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33331
-2022-08-26 14:02:18,528 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33215
-2022-08-26 14:02:18,528 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:18,528 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:18,528 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:18,528 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-njrmz9cl
-2022-08-26 14:02:18,528 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:18,530 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44445
-2022-08-26 14:02:18,530 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44445
-2022-08-26 14:02:18,530 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41813
-2022-08-26 14:02:18,530 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33215
-2022-08-26 14:02:18,530 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:18,530 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:18,530 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:18,530 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-07j5__gf
-2022-08-26 14:02:18,530 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:18,729 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38515', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:18,929 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38515
-2022-08-26 14:02:18,929 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:18,929 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33215
-2022-08-26 14:02:18,929 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:18,929 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44445', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:18,930 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:18,930 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44445
-2022-08-26 14:02:18,930 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:18,930 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33215
-2022-08-26 14:02:18,930 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:18,931 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:18,936 - distributed.scheduler - INFO - Receive client connection: Client-5f11d5be-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:18,937 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:02:18,959 - distributed.scheduler - INFO - Remove client Client-5f11d5be-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:18,959 - distributed.scheduler - INFO - Remove client Client-5f11d5be-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:18,959 - distributed.scheduler - INFO - Close client connection: Client-5f11d5be-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_client_with_scheduler PASSED
-distributed/tests/test_client.py::test_allow_restrictions PASSED
-distributed/tests/test_client.py::test_bad_address PASSED
-distributed/tests/test_client.py::test_informative_error_on_cluster_type PASSED
-distributed/tests/test_client.py::test_long_error 2022-08-26 14:02:19,815 - distributed.worker - WARNING - Compute Failed
-Key:       bad-975597fb8bafe700e5afc1638565b946
-Function:  bad
-args:      (10)
-kwargs:    {}
-Exception: "ValueError('Long error message', 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')"
-
-PASSED
-distributed/tests/test_client.py::test_map_on_futures_with_kwargs PASSED
-distributed/tests/test_client.py::test_badly_serialized_input 2022-08-26 14:02:20,347 - distributed.worker - ERROR - Could not deserialize task inc-6591a613cdc36aaa6b0c740fda71a86c
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2161, in execute
-    function, args, kwargs = await self._maybe_deserialize_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2134, in _maybe_deserialize_task
-    function, args, kwargs = _deserialize(*ts.run_spec)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2757, in _deserialize
-    args = pickle.loads(args)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/pickle.py", line 73, in loads
-    return pickle.loads(x)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_client.py", line 1971, in __setstate__
-    raise TypeError("hello!")
-TypeError: hello!
-PASSED
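
The "Could not deserialize task" error above is intentional: test_badly_serialized_input submits an argument that pickles on the client but raises TypeError("hello!") in __setstate__ when the worker unpickles it, which is exactly where the traceback ends. A rough sketch of that shape (class name and client setup are illustrative, and how the error is re-raised on the client may vary by distributed version):

    from distributed import Client

    class BadlyDeserializable:
        """Pickles fine on the client, refuses to unpickle on the worker."""

        def __getstate__(self):
            return {}

        def __setstate__(self, state):
            raise TypeError("hello!")  # same TypeError seen in the log

    if __name__ == "__main__":
        client = Client(n_workers=1, threads_per_worker=1)
        future = client.submit(lambda x: x, BadlyDeserializable())
        try:
            future.result()  # worker-side deserialization failure surfaces here
        except Exception as exc:
            print("task could not be deserialized:", exc)
        finally:
            client.close()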
-distributed/tests/test_client.py::test_badly_serialized_input_stderr SKIPPED
-distributed/tests/test_client.py::test_repr 2022-08-26 14:02:21,426 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 40103 instead
-  warnings.warn(
-2022-08-26 14:02:21,429 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:21,432 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:21,432 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38113
-2022-08-26 14:02:21,432 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40103
-2022-08-26 14:02:21,441 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40437
-2022-08-26 14:02:21,441 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43155
-2022-08-26 14:02:21,441 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40437
-2022-08-26 14:02:21,441 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43155
-2022-08-26 14:02:21,441 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34161
-2022-08-26 14:02:21,441 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40439
-2022-08-26 14:02:21,441 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38113
-2022-08-26 14:02:21,441 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38113
-2022-08-26 14:02:21,441 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:21,441 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:21,441 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:21,441 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:21,441 - distributed.worker - INFO -                Memory:                   2.00 GiB
-2022-08-26 14:02:21,441 - distributed.worker - INFO -                Memory:                   2.00 GiB
-2022-08-26 14:02:21,441 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7_4ua7m7
-2022-08-26 14:02:21,441 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-chkfwuiq
-2022-08-26 14:02:21,441 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:21,441 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:21,461 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46347
-2022-08-26 14:02:21,461 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46347
-2022-08-26 14:02:21,461 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38091
-2022-08-26 14:02:21,461 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38113
-2022-08-26 14:02:21,461 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:21,461 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:21,461 - distributed.worker - INFO -                Memory:                   2.00 GiB
-2022-08-26 14:02:21,461 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uoqa3uyd
-2022-08-26 14:02:21,461 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:21,689 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40437', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:21,894 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40437
-2022-08-26 14:02:21,894 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:21,894 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38113
-2022-08-26 14:02:21,895 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:21,895 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46347', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:21,895 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:21,895 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46347
-2022-08-26 14:02:21,896 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:21,896 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43155', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:21,896 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38113
-2022-08-26 14:02:21,896 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:21,896 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43155
-2022-08-26 14:02:21,896 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:21,896 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38113
-2022-08-26 14:02:21,897 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:21,897 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:21,897 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:21,903 - distributed.scheduler - INFO - Receive client connection: Client-60d68728-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:21,904 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:21,928 - distributed.scheduler - INFO - Remove client Client-60d68728-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:21,928 - distributed.scheduler - INFO - Remove client Client-60d68728-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:21,928 - distributed.scheduler - INFO - Close client connection: Client-60d68728-2582-11ed-a99d-00d861bc4509
-PASSED
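
Unlike most fixtures in this run, test_repr evidently starts its three workers with an explicit memory limit, which is why they report "Memory: 2.00 GiB" instead of the machine-wide 62.82 GiB seen elsewhere. The same cap is available outside the tests; a sketch assuming the standard memory_limit keyword of LocalCluster:

    from distributed import Client, LocalCluster

    if __name__ == "__main__":
        # Cap each worker at 2 GiB rather than letting it claim a share
        # of total system memory.
        cluster = LocalCluster(n_workers=3, threads_per_worker=1,
                               memory_limit="2 GiB")
        client = Client(cluster)
        print(client)  # the repr includes workers, threads and total memory
        client.close()
        cluster.close()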
-distributed/tests/test_client.py::test_repr_async PASSED
-distributed/tests/test_client.py::test_repr_no_memory_limit PASSED
-distributed/tests/test_client.py::test_repr_localcluster PASSED
-distributed/tests/test_client.py::test_forget_simple PASSED
-distributed/tests/test_client.py::test_forget_complex PASSED
-distributed/tests/test_client.py::test_forget_in_flight PASSED
-distributed/tests/test_client.py::test_forget_errors 2022-08-26 14:02:23,414 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
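
A "Compute Failed" warning like the one above is the worker's normal record of a task that raised; the test then checks how the scheduler forgets such errored keys. From the client side the failure stays inspectable without ending the session; a short sketch (the div helper is illustrative, mirroring the function named in the log):

    from distributed import Client

    def div(x, y):
        return x / y

    if __name__ == "__main__":
        client = Client(n_workers=1, threads_per_worker=1)
        future = client.submit(div, 1, 0)  # worker logs "Compute Failed"
        print(future.exception())          # ZeroDivisionError('division by zero')
        print(future.traceback())          # remote traceback object
        client.close()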
-distributed/tests/test_client.py::test_repr_sync 2022-08-26 14:02:24,404 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:24,406 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:24,409 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:24,409 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44505
-2022-08-26 14:02:24,409 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:24,418 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33357
-2022-08-26 14:02:24,418 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33357
-2022-08-26 14:02:24,418 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44129
-2022-08-26 14:02:24,418 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44505
-2022-08-26 14:02:24,418 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:24,418 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:24,418 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:24,418 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bfqnjfk7
-2022-08-26 14:02:24,418 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:24,421 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35985
-2022-08-26 14:02:24,421 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35985
-2022-08-26 14:02:24,421 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34995
-2022-08-26 14:02:24,421 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44505
-2022-08-26 14:02:24,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:24,422 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:24,422 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:24,422 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jqfs9t3d
-2022-08-26 14:02:24,422 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:24,628 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35985', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:24,830 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35985
-2022-08-26 14:02:24,831 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:24,831 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44505
-2022-08-26 14:02:24,831 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:24,831 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33357', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:24,832 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:24,832 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33357
-2022-08-26 14:02:24,832 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:24,832 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44505
-2022-08-26 14:02:24,833 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:24,833 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:24,839 - distributed.scheduler - INFO - Receive client connection: Client-629673eb-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:24,839 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:02:24,851 - distributed.scheduler - INFO - Remove client Client-629673eb-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:24,851 - distributed.scheduler - INFO - Remove client Client-629673eb-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:24,851 - distributed.scheduler - INFO - Close client connection: Client-629673eb-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_multi_client PASSED
-distributed/tests/test_client.py::test_cleanup_after_broken_client_connection PASSED
-distributed/tests/test_client.py::test_multi_garbage_collection PASSED
-distributed/tests/test_client.py::test__broadcast PASSED
-distributed/tests/test_client.py::test__broadcast_integer PASSED
-distributed/tests/test_client.py::test__broadcast_dict PASSED
-distributed/tests/test_client.py::test_proxy PASSED
-distributed/tests/test_client.py::test_cancel PASSED
-distributed/tests/test_client.py::test_cancel_tuple_key PASSED
-distributed/tests/test_client.py::test_cancel_multi_client PASSED
-distributed/tests/test_client.py::test_cancel_before_known_to_scheduler PASSED
-distributed/tests/test_client.py::test_cancel_collection PASSED
-distributed/tests/test_client.py::test_cancel_sync 2022-08-26 14:02:31,200 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 33577 instead
-  warnings.warn(
-2022-08-26 14:02:31,203 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:31,205 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:31,206 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39137
-2022-08-26 14:02:31,206 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33577
-2022-08-26 14:02:31,214 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46723
-2022-08-26 14:02:31,215 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46723
-2022-08-26 14:02:31,215 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33245
-2022-08-26 14:02:31,215 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45861
-2022-08-26 14:02:31,215 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39137
-2022-08-26 14:02:31,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:31,215 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45861
-2022-08-26 14:02:31,215 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:31,215 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:31,215 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33679
-2022-08-26 14:02:31,215 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kfkgzmfs
-2022-08-26 14:02:31,215 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39137
-2022-08-26 14:02:31,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:31,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:31,215 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:31,215 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:31,215 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zq92p512
-2022-08-26 14:02:31,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:31,416 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46723', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:31,617 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46723
-2022-08-26 14:02:31,618 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:31,618 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39137
-2022-08-26 14:02:31,618 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:31,618 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45861', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:31,619 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:31,619 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45861
-2022-08-26 14:02:31,619 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:31,619 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39137
-2022-08-26 14:02:31,619 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:31,620 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:31,625 - distributed.scheduler - INFO - Receive client connection: Client-66a20622-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:31,626 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:31,629 - distributed.scheduler - INFO - Client Client-66a20622-2582-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:02:31,730 - distributed.scheduler - INFO - Scheduler cancels key z.  Force=False
-2022-08-26 14:02:31,730 - distributed.scheduler - INFO - Scheduler cancels key y.  Force=False
-2022-08-26 14:02:31,735 - distributed.scheduler - INFO - Client Client-66a20622-2582-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-PASSED2022-08-26 14:02:32,756 - distributed.scheduler - INFO - Remove client Client-66a20622-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:32,757 - distributed.scheduler - INFO - Remove client Client-66a20622-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:32,757 - distributed.scheduler - INFO - Close client connection: Client-66a20622-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_future_type PASSED
-distributed/tests/test_client.py::test_traceback_clean 2022-08-26 14:02:33,057 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
-distributed/tests/test_client.py::test_map_differnet_lengths PASSED
-distributed/tests/test_client.py::test_Future_exception_sync_2 2022-08-26 14:02:34,297 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 39619 instead
-  warnings.warn(
-2022-08-26 14:02:34,299 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:34,302 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:34,302 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43923
-2022-08-26 14:02:34,302 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39619
-2022-08-26 14:02:34,314 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44103
-2022-08-26 14:02:34,314 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44103
-2022-08-26 14:02:34,314 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41599
-2022-08-26 14:02:34,314 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38451
-2022-08-26 14:02:34,314 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43923
-2022-08-26 14:02:34,314 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:34,314 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38451
-2022-08-26 14:02:34,314 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:34,314 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:34,314 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42823
-2022-08-26 14:02:34,314 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-46zj6r6q
-2022-08-26 14:02:34,314 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43923
-2022-08-26 14:02:34,315 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:34,315 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:34,315 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:34,315 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:34,315 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kfvz77wt
-2022-08-26 14:02:34,315 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:34,521 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38451', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:34,719 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38451
-2022-08-26 14:02:34,719 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:34,719 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43923
-2022-08-26 14:02:34,719 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:34,720 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44103', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:34,720 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:34,720 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44103
-2022-08-26 14:02:34,720 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:34,721 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43923
-2022-08-26 14:02:34,721 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:34,722 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:34,727 - distributed.scheduler - INFO - Receive client connection: Client-687b5175-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:34,728 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:34,739 - distributed.scheduler - INFO - Remove client Client-687b5175-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:34,740 - distributed.scheduler - INFO - Remove client Client-687b5175-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:34,740 - distributed.scheduler - INFO - Close client connection: Client-687b5175-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_client.py::test_async_persist PASSED
-distributed/tests/test_client.py::test__persist PASSED
-distributed/tests/test_client.py::test_persist 2022-08-26 14:02:36,072 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:36,074 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:36,077 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:36,077 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39407
-2022-08-26 14:02:36,077 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:36,079 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-zy7nz6y4', purging
-2022-08-26 14:02:36,080 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-9rvwk5wx', purging
-2022-08-26 14:02:36,086 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43697
-2022-08-26 14:02:36,086 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43697
-2022-08-26 14:02:36,086 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41059
-2022-08-26 14:02:36,086 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39407
-2022-08-26 14:02:36,086 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:36,086 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:36,086 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35165
-2022-08-26 14:02:36,086 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:36,086 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35165
-2022-08-26 14:02:36,086 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jkz32194
-2022-08-26 14:02:36,086 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46731
-2022-08-26 14:02:36,086 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:36,087 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39407
-2022-08-26 14:02:36,087 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:36,087 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:36,087 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:36,087 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9vggfql0
-2022-08-26 14:02:36,087 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:36,293 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43697', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:36,500 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43697
-2022-08-26 14:02:36,501 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:36,501 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39407
-2022-08-26 14:02:36,501 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:36,502 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35165', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:36,502 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:36,502 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35165
-2022-08-26 14:02:36,502 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:36,503 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39407
-2022-08-26 14:02:36,503 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:36,504 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:36,508 - distributed.scheduler - INFO - Receive client connection: Client-698b3353-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:36,509 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:02:36,542 - distributed.scheduler - INFO - Remove client Client-698b3353-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:36,542 - distributed.scheduler - INFO - Remove client Client-698b3353-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:36,543 - distributed.scheduler - INFO - Close client connection: Client-698b3353-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_long_traceback 2022-08-26 14:02:36,621 - distributed.worker - WARNING - Compute Failed
-Key:       deep-343ef0918c0ac8a1d154dd61c891b146
-Function:  deep
-args:      (200)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
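
The Compute Failed warning above is what a worker logs when a submitted task
raises; the error object travels back to the client and surfaces through the
returned future. A minimal sketch of that round trip, assuming a throwaway
local cluster (the recursing function is illustrative, not the test suite's):

    from distributed import Client

    def deep(n):
        # recurse a little, then fail, mimicking a task that errors remotely
        if n > 0:
            return deep(n - 1)
        return 1 / 0                      # ZeroDivisionError on the worker

    client = Client(n_workers=1, threads_per_worker=1)  # local cluster
    future = client.submit(deep, 200)
    print(future.exception())             # -> division by zero
    # future.result() would re-raise the same ZeroDivisionError locally
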
-distributed/tests/test_client.py::test_wait_on_collections PASSED
-distributed/tests/test_client.py::test_futures_of_get PASSED
-distributed/tests/test_client.py::test_futures_of_class PASSED
-distributed/tests/test_client.py::test_futures_of_cancelled_raises PASSED
-distributed/tests/test_client.py::test_dont_delete_recomputed_results SKIPPED
-distributed/tests/test_client.py::test_fatally_serialized_input PASSED
-distributed/tests/test_client.py::test_balance_tasks_by_stacks SKIPPED
-distributed/tests/test_client.py::test_run PASSED
-distributed/tests/test_client.py::test_run_handles_picklable_data PASSED
-distributed/tests/test_client.py::test_run_sync 2022-08-26 14:02:39,182 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:39,184 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:39,187 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:39,187 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42985
-2022-08-26 14:02:39,187 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:39,189 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-4udy319w', purging
-2022-08-26 14:02:39,189 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-sg53c2kv', purging
-2022-08-26 14:02:39,196 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39113
-2022-08-26 14:02:39,196 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39113
-2022-08-26 14:02:39,196 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41853
-2022-08-26 14:02:39,196 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42985
-2022-08-26 14:02:39,196 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:39,196 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:39,196 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:39,196 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-09ru4480
-2022-08-26 14:02:39,196 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:39,196 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34381
-2022-08-26 14:02:39,196 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34381
-2022-08-26 14:02:39,196 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45129
-2022-08-26 14:02:39,196 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42985
-2022-08-26 14:02:39,196 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:39,196 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:39,196 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:39,196 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2x__anq4
-2022-08-26 14:02:39,197 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:39,408 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34381', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:39,613 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34381
-2022-08-26 14:02:39,614 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:39,614 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42985
-2022-08-26 14:02:39,614 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:39,615 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39113', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:39,615 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39113
-2022-08-26 14:02:39,615 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:39,615 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:39,616 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42985
-2022-08-26 14:02:39,616 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:39,617 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:39,622 - distributed.scheduler - INFO - Receive client connection: Client-6b663e90-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:39,622 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:39,626 - distributed.worker - INFO - Run out-of-band function 'func'
-2022-08-26 14:02:39,626 - distributed.worker - INFO - Run out-of-band function 'func'
-2022-08-26 14:02:39,629 - distributed.worker - INFO - Run out-of-band function 'func'
-PASSED2022-08-26 14:02:39,634 - distributed.scheduler - INFO - Remove client Client-6b663e90-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:39,634 - distributed.scheduler - INFO - Remove client Client-6b663e90-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_run_coroutine 2022-08-26 14:02:39,788 - distributed.worker - WARNING - Run Failed
-Function: throws
-args:     (1)
-kwargs:   {}
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 3068, in run
-    result = function(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 238, in throws
-    raise RuntimeError("hello!")
-RuntimeError: hello!
-2022-08-26 14:02:39,789 - distributed.worker - WARNING - Run Failed
-Function: throws
-args:     (1)
-kwargs:   {}
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 3068, in run
-    result = function(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 238, in throws
-    raise RuntimeError("hello!")
-RuntimeError: hello!
-PASSED
-distributed/tests/test_client.py::test_run_coroutine_sync 2022-08-26 14:02:40,808 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 44777 instead
-  warnings.warn(
-2022-08-26 14:02:40,810 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:40,814 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:40,814 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44601
-2022-08-26 14:02:40,814 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44777
-2022-08-26 14:02:40,817 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-r5ze39bg', purging
-2022-08-26 14:02:40,817 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-y0lu2mr5', purging
-2022-08-26 14:02:40,823 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37907
-2022-08-26 14:02:40,823 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37907
-2022-08-26 14:02:40,823 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34961
-2022-08-26 14:02:40,823 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44601
-2022-08-26 14:02:40,824 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:40,824 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:40,824 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36197
-2022-08-26 14:02:40,824 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:40,824 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36197
-2022-08-26 14:02:40,824 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7vymzrvy
-2022-08-26 14:02:40,824 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46337
-2022-08-26 14:02:40,824 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:40,824 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44601
-2022-08-26 14:02:40,824 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:40,824 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:40,824 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:40,824 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4ycbel9x
-2022-08-26 14:02:40,824 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:41,029 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37907', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:41,235 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37907
-2022-08-26 14:02:41,235 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:41,235 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44601
-2022-08-26 14:02:41,235 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:41,236 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36197', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:41,236 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:41,236 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36197
-2022-08-26 14:02:41,236 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:41,236 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44601
-2022-08-26 14:02:41,237 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:41,237 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:41,243 - distributed.scheduler - INFO - Receive client connection: Client-6c5d970c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:41,243 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:41,246 - distributed.worker - INFO - Run out-of-band function 'asyncinc'
-2022-08-26 14:02:41,246 - distributed.worker - INFO - Run out-of-band function 'asyncinc'
-2022-08-26 14:02:41,260 - distributed.worker - INFO - Run out-of-band function 'asyncinc'
-2022-08-26 14:02:41,283 - distributed.worker - INFO - Run out-of-band function 'asyncinc'
-2022-08-26 14:02:41,283 - distributed.worker - INFO - Run out-of-band function 'asyncinc'
-PASSED2022-08-26 14:02:41,285 - distributed.scheduler - INFO - Remove client Client-6c5d970c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:41,285 - distributed.scheduler - INFO - Remove client Client-6c5d970c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:41,285 - distributed.scheduler - INFO - Close client connection: Client-6c5d970c-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_run_exception 2022-08-26 14:02:41,339 - distributed.worker - WARNING - Run Failed
-Function: raise_exception
-args:     ()
-kwargs:   {'addr': 'tcp://127.0.0.1:38469', 'dask_worker': <Worker 'tcp://127.0.0.1:38469', name: 0, status: running, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>}
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 3068, in run
-    result = function(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_client.py", line 2703, in raise_exception
-    raise MyError("informative message")
-test_client.test_run_exception.<locals>.MyError: informative message
-2022-08-26 14:02:41,344 - distributed.worker - WARNING - Run Failed
-Function: raise_exception
-args:     ()
-kwargs:   {'addr': 'tcp://127.0.0.1:38469', 'dask_worker': <Worker 'tcp://127.0.0.1:38469', name: 0, status: running, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>}
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 3068, in run
-    result = function(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_client.py", line 2703, in raise_exception
-    raise MyError("informative message")
-test_client.test_run_exception.<locals>.MyError: informative message
-2022-08-26 14:02:41,349 - distributed.worker - WARNING - Run Failed
-Function: raise_exception
-args:     ()
-kwargs:   {'addr': 'tcp://127.0.0.1:38469', 'dask_worker': <Worker 'tcp://127.0.0.1:38469', name: 0, status: running, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>}
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 3068, in run
-    result = function(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_client.py", line 2703, in raise_exception
-    raise MyError("informative message")
-test_client.test_run_exception.<locals>.MyError: informative message
-2022-08-26 14:02:41,354 - distributed.worker - WARNING - Run Failed
-Function: raise_exception
-args:     ()
-kwargs:   {'addr': 'tcp://127.0.0.1:38469', 'dask_worker': <Worker 'tcp://127.0.0.1:38469', name: 0, status: running, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>}
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 3068, in run
-    result = function(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_client.py", line 2703, in raise_exception
-    raise MyError("informative message")
-test_client.test_run_exception.<locals>.MyError: informative message
-2022-08-26 14:02:41,359 - distributed.worker - WARNING - Run Failed
-Function: raise_exception
-args:     ()
-kwargs:   {'addr': 'tcp://127.0.0.1:38469', 'dask_worker': <Worker 'tcp://127.0.0.1:38469', name: 0, status: running, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>}
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 3068, in run
-    result = function(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_client.py", line 2703, in raise_exception
-    raise MyError("informative message")
-test_client.test_run_exception.<locals>.MyError: informative message
-PASSED
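
The repeated Run Failed tracebacks above come from Client.run executing the
same function out-of-band on every worker: the dask_worker entry in kwargs is
the Worker object that run() injects when the function accepts that keyword,
and the worker-side exception is re-raised on the client. A minimal sketch,
assuming a throwaway local cluster and an illustrative error class:

    from distributed import Client

    class MyError(Exception):
        pass

    def raise_exception(dask_worker=None):
        # Client.run passes the Worker instance via the ``dask_worker`` keyword
        raise MyError("informative message")

    client = Client(n_workers=2, threads_per_worker=1)  # local cluster
    try:
        client.run(raise_exception)       # runs on every worker, then re-raises
    except Exception as exc:
        print("surfaced on the client:", type(exc).__name__, exc)
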
-distributed/tests/test_client.py::test_run_rpc_error 2022-08-26 14:02:41,788 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:34031 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:34031 after 0.2 s
-2022-08-26 14:02:41,990 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:34031 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:34031 after 0.2 s
-2022-08-26 14:02:42,192 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:34031 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:34031 after 0.2 s
-2022-08-26 14:02:42,393 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:34031 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:34031 after 0.2 s
-PASSED
-distributed/tests/test_client.py::test_diagnostic_ui 2022-08-26 14:02:43,384 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:43,386 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:43,389 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:43,389 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36685
-2022-08-26 14:02:43,389 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:43,391 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-knpw_soa', purging
-2022-08-26 14:02:43,391 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-t_y1gt8u', purging
-2022-08-26 14:02:43,398 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43421
-2022-08-26 14:02:43,398 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43421
-2022-08-26 14:02:43,398 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46313
-2022-08-26 14:02:43,398 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36685
-2022-08-26 14:02:43,399 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:43,399 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:43,399 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:43,399 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ouq6q4lf
-2022-08-26 14:02:43,399 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:43,404 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36913
-2022-08-26 14:02:43,404 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36913
-2022-08-26 14:02:43,404 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46277
-2022-08-26 14:02:43,404 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36685
-2022-08-26 14:02:43,404 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:43,404 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:43,404 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:43,404 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t_5_frzm
-2022-08-26 14:02:43,404 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:43,627 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43421', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:43,828 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43421
-2022-08-26 14:02:43,829 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:43,829 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36685
-2022-08-26 14:02:43,829 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:43,830 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36913', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:43,830 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:43,830 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36913
-2022-08-26 14:02:43,830 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:43,830 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36685
-2022-08-26 14:02:43,831 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:43,831 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:43,836 - distributed.scheduler - INFO - Receive client connection: Client-6de94d44-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:43,836 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:43,859 - distributed.scheduler - INFO - Remove client Client-6de94d44-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:43,859 - distributed.scheduler - INFO - Remove client Client-6de94d44-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_client.py::test_diagnostic_nbytes_sync 2022-08-26 14:02:44,668 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:44,670 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:44,673 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:44,673 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35595
-2022-08-26 14:02:44,673 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:44,676 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-5nkmr19l', purging
-2022-08-26 14:02:44,676 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ouq6q4lf', purging
-2022-08-26 14:02:44,676 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-1js0pucg', purging
-2022-08-26 14:02:44,676 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-t_5_frzm', purging
-2022-08-26 14:02:44,682 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43301
-2022-08-26 14:02:44,682 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43301
-2022-08-26 14:02:44,682 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42191
-2022-08-26 14:02:44,682 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35595
-2022-08-26 14:02:44,682 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:44,682 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:44,682 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:44,682 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m2826il2
-2022-08-26 14:02:44,682 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:44,683 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41451
-2022-08-26 14:02:44,683 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41451
-2022-08-26 14:02:44,683 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33387
-2022-08-26 14:02:44,683 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35595
-2022-08-26 14:02:44,683 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:44,683 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:44,683 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:44,683 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5ocnzso1
-2022-08-26 14:02:44,683 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:44,891 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41451', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:45,092 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41451
-2022-08-26 14:02:45,093 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:45,093 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35595
-2022-08-26 14:02:45,093 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:45,093 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43301', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:45,094 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43301
-2022-08-26 14:02:45,094 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:45,094 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:45,094 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35595
-2022-08-26 14:02:45,094 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:45,095 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:45,100 - distributed.scheduler - INFO - Receive client connection: Client-6eaa35f4-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:45,100 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:02:45,122 - distributed.scheduler - INFO - Remove client Client-6eaa35f4-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:45,123 - distributed.scheduler - INFO - Remove client Client-6eaa35f4-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_diagnostic_nbytes PASSED
-distributed/tests/test_client.py::test_worker_aliases PASSED
-distributed/tests/test_client.py::test_persist_get_sync 2022-08-26 14:02:46,464 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:46,467 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:46,469 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:46,469 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33641
-2022-08-26 14:02:46,469 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:46,472 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-17ioud87', purging
-2022-08-26 14:02:46,472 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-erzgxs6f', purging
-2022-08-26 14:02:46,479 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42203
-2022-08-26 14:02:46,479 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42203
-2022-08-26 14:02:46,479 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34391
-2022-08-26 14:02:46,479 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33641
-2022-08-26 14:02:46,479 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:46,479 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:46,479 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:46,479 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-r_itbwp7
-2022-08-26 14:02:46,479 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:46,479 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43989
-2022-08-26 14:02:46,479 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43989
-2022-08-26 14:02:46,479 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40543
-2022-08-26 14:02:46,479 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33641
-2022-08-26 14:02:46,479 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:46,479 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:46,479 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:46,480 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-68vb1sh7
-2022-08-26 14:02:46,480 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:46,688 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42203', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:46,895 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42203
-2022-08-26 14:02:46,895 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:46,895 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33641
-2022-08-26 14:02:46,895 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:46,896 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43989', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:46,896 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:46,896 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43989
-2022-08-26 14:02:46,896 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:46,897 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33641
-2022-08-26 14:02:46,897 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:46,898 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:46,904 - distributed.scheduler - INFO - Receive client connection: Client-6fbd4d47-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:46,904 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:02:46,953 - distributed.scheduler - INFO - Remove client Client-6fbd4d47-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:46,953 - distributed.scheduler - INFO - Remove client Client-6fbd4d47-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:46,954 - distributed.scheduler - INFO - Close client connection: Client-6fbd4d47-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_persist_get PASSED
-distributed/tests/test_client.py::test_client_num_fds 2022-08-26 14:02:48,537 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 43189 instead
-  warnings.warn(
-2022-08-26 14:02:48,540 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:48,543 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:48,543 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45085
-2022-08-26 14:02:48,543 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43189
-2022-08-26 14:02:48,553 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45609
-2022-08-26 14:02:48,553 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45609
-2022-08-26 14:02:48,553 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35225
-2022-08-26 14:02:48,553 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45085
-2022-08-26 14:02:48,553 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:48,553 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:48,553 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:48,553 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0z3rgaj4
-2022-08-26 14:02:48,553 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:48,553 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33885
-2022-08-26 14:02:48,553 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33885
-2022-08-26 14:02:48,553 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35279
-2022-08-26 14:02:48,553 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45085
-2022-08-26 14:02:48,553 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:48,553 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:48,553 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:48,553 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lf7x28o6
-2022-08-26 14:02:48,554 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:48,766 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45609', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:48,981 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45609
-2022-08-26 14:02:48,982 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:48,982 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45085
-2022-08-26 14:02:48,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:48,982 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33885', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:48,983 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33885
-2022-08-26 14:02:48,983 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:48,983 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:48,983 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45085
-2022-08-26 14:02:48,983 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:48,984 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:48,989 - distributed.scheduler - INFO - Receive client connection: Client-70fba42d-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:48,989 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:48,993 - distributed.scheduler - INFO - Receive client connection: Client-70fc3827-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:48,993 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:49,004 - distributed.scheduler - INFO - Remove client Client-70fc3827-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,005 - distributed.scheduler - INFO - Remove client Client-70fc3827-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,005 - distributed.scheduler - INFO - Close client connection: Client-70fc3827-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,007 - distributed.scheduler - INFO - Receive client connection: Client-70fe7288-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,008 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:49,019 - distributed.scheduler - INFO - Remove client Client-70fe7288-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,019 - distributed.scheduler - INFO - Remove client Client-70fe7288-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,019 - distributed.scheduler - INFO - Close client connection: Client-70fe7288-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,022 - distributed.scheduler - INFO - Receive client connection: Client-7100ad60-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,022 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:49,034 - distributed.scheduler - INFO - Remove client Client-7100ad60-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,034 - distributed.scheduler - INFO - Remove client Client-7100ad60-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,034 - distributed.scheduler - INFO - Close client connection: Client-7100ad60-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,036 - distributed.scheduler - INFO - Receive client connection: Client-7102e5de-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,037 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:49,048 - distributed.scheduler - INFO - Remove client Client-7102e5de-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,048 - distributed.scheduler - INFO - Remove client Client-7102e5de-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,048 - distributed.scheduler - INFO - Close client connection: Client-7102e5de-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,049 - distributed.scheduler - INFO - Remove client Client-70fba42d-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,049 - distributed.scheduler - INFO - Remove client Client-70fba42d-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:49,049 - distributed.scheduler - INFO - Close client connection: Client-70fba42d-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_client.py::test_startup_close_startup PASSED
-distributed/tests/test_client.py::test_startup_close_startup_sync 2022-08-26 14:02:50,115 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:50,117 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:50,120 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:50,120 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35209
-2022-08-26 14:02:50,120 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:50,122 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-086mgyrt', purging
-2022-08-26 14:02:50,122 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-fuaivghm', purging
-2022-08-26 14:02:50,130 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39187
-2022-08-26 14:02:50,130 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39187
-2022-08-26 14:02:50,130 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42687
-2022-08-26 14:02:50,130 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35209
-2022-08-26 14:02:50,131 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:50,131 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:50,131 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:50,131 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mt54yzi7
-2022-08-26 14:02:50,131 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:50,133 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37053
-2022-08-26 14:02:50,133 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37053
-2022-08-26 14:02:50,133 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35483
-2022-08-26 14:02:50,133 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35209
-2022-08-26 14:02:50,133 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:50,133 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:50,133 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:50,133 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hxsd2xi9
-2022-08-26 14:02:50,133 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:50,343 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39187', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:50,549 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39187
-2022-08-26 14:02:50,549 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:50,549 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35209
-2022-08-26 14:02:50,549 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:50,550 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37053', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:50,550 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37053
-2022-08-26 14:02:50,550 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:50,550 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:50,550 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35209
-2022-08-26 14:02:50,551 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:50,551 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:50,556 - distributed.scheduler - INFO - Receive client connection: Client-71eab047-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:50,556 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:50,657 - distributed.scheduler - INFO - Remove client Client-71eab047-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:50,658 - distributed.scheduler - INFO - Remove client Client-71eab047-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:50,658 - distributed.scheduler - INFO - Close client connection: Client-71eab047-2582-11ed-a99d-00d861bc4509
-FAILED
-distributed/tests/test_client.py::test_badly_serialized_exceptions 2022-08-26 14:02:50,974 - distributed.worker - WARNING - Compute Failed
-Key:       f-6c8dff9e3384459bf297e46f858c7727
-Function:  f
-args:      ()
-kwargs:    {}
-Exception: "BadlySerializedException('hello world')"
-
-PASSED
-distributed/tests/test_client.py::test_rebalance PASSED
-distributed/tests/test_client.py::test_rebalance_workers_and_keys 2022-08-26 14:02:51,504 - distributed.core - ERROR - 'notexist'
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5532, in rebalance
-    wss = [self.workers[w] for w in workers]
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5532, in <listcomp>
-    wss = [self.workers[w] for w in workers]
-KeyError: 'notexist'
-2022-08-26 14:02:51,505 - distributed.core - ERROR - Exception while handling op rebalance
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5532, in rebalance
-    wss = [self.workers[w] for w in workers]
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5532, in <listcomp>
-    wss = [self.workers[w] for w in workers]
-KeyError: 'notexist'
-PASSED
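
The KeyError: 'notexist' traceback above is the scheduler rejecting a
rebalance request that names a worker address it does not know; the error is
sent back to the caller. A minimal sketch, assuming a throwaway local cluster
and an illustrative submitted task:

    from distributed import Client

    client = Client(n_workers=2, threads_per_worker=1)  # local cluster
    x = client.submit(sum, [1, 2, 3])
    x.result()                            # make sure the key exists somewhere

    client.rebalance()                    # spread keys across live workers
    try:
        client.rebalance(workers=["notexist"])   # unknown address, as in the log
    except Exception as exc:
        # the scheduler-side KeyError('notexist') propagates back here
        print(type(exc).__name__, exc)
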
-distributed/tests/test_client.py::test_rebalance_sync PASSED
-distributed/tests/test_client.py::test_rebalance_unprepared PASSED
-distributed/tests/test_client.py::test_rebalance_raises_on_explicit_missing_data PASSED
-distributed/tests/test_client.py::test_receive_lost_key PASSED
-distributed/tests/test_client.py::test_unrunnable_task_runs PASSED
-distributed/tests/test_client.py::test_add_worker_after_tasks 2022-08-26 14:02:53,861 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-lj5qbu_w', purging
-2022-08-26 14:02:53,861 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-64tim94_', purging
-2022-08-26 14:02:53,866 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38537
-2022-08-26 14:02:53,866 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38537
-2022-08-26 14:02:53,866 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42245
-2022-08-26 14:02:53,866 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36545
-2022-08-26 14:02:53,866 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:53,866 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:02:53,867 - distributed.worker - INFO -                Memory:                  10.47 GiB
-2022-08-26 14:02:53,867 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pamoz_6b
-2022-08-26 14:02:53,867 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:54,104 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36545
-2022-08-26 14:02:54,104 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:54,105 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:54,306 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38537
-2022-08-26 14:02:54,307 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f59f60a9-c519-40b3-a0ac-b00d4412663e Address tcp://127.0.0.1:38537 Status: Status.closing
-PASSED
-distributed/tests/test_client.py::test_workers_register_indirect_data PASSED
-distributed/tests/test_client.py::test_submit_on_cancelled_future PASSED
-distributed/tests/test_client.py::test_replicate PASSED
-distributed/tests/test_client.py::test_replicate_tuple_keys PASSED
-distributed/tests/test_client.py::test_replicate_workers PASSED
-distributed/tests/test_client.py::test_replicate_tree_branching PASSED
-distributed/tests/test_client.py::test_client_replicate PASSED
-distributed/tests/test_client.py::test_client_replicate_host PASSED
-distributed/tests/test_client.py::test_client_replicate_sync 2022-08-26 14:02:57,814 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:02:57,817 - distributed.scheduler - INFO - State start
-2022-08-26 14:02:57,819 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:02:57,820 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43775
-2022-08-26 14:02:57,820 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:02:57,822 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-cw965qok', purging
-2022-08-26 14:02:57,822 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-pcrj_4la', purging
-2022-08-26 14:02:57,829 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33207
-2022-08-26 14:02:57,829 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33207
-2022-08-26 14:02:57,829 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46457
-2022-08-26 14:02:57,829 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43775
-2022-08-26 14:02:57,830 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:57,830 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:57,830 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:57,830 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-b6aw70ho
-2022-08-26 14:02:57,830 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:57,837 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35863
-2022-08-26 14:02:57,837 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35863
-2022-08-26 14:02:57,837 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46167
-2022-08-26 14:02:57,837 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43775
-2022-08-26 14:02:57,838 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:57,838 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:02:57,838 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:02:57,838 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-avi6vvag
-2022-08-26 14:02:57,838 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:58,041 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33207', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:58,251 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33207
-2022-08-26 14:02:58,251 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:58,251 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43775
-2022-08-26 14:02:58,251 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:58,251 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35863', status: init, memory: 0, processing: 0>
-2022-08-26 14:02:58,252 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:58,252 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35863
-2022-08-26 14:02:58,252 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:58,252 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43775
-2022-08-26 14:02:58,253 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:02:58,253 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:58,258 - distributed.scheduler - INFO - Receive client connection: Client-7682042f-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:58,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:02:58,281 - distributed.core - ERROR - Exception while handling op replicate
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5854, in replicate
-    raise ValueError("Can not use replicate to delete data")
-ValueError: Can not use replicate to delete data
-PASSED2022-08-26 14:02:58,359 - distributed.scheduler - INFO - Remove client Client-7682042f-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:02:58,359 - distributed.scheduler - INFO - Remove client Client-7682042f-2582-11ed-a99d-00d861bc4509
-
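
The "Can not use replicate to delete data" error above is the scheduler
refusing a replicate call that asks for zero copies, since replicate only adds
replicas. A minimal sketch, assuming a throwaway local cluster and an
illustrative submitted task:

    from distributed import Client

    client = Client(n_workers=2, threads_per_worker=1)  # local cluster
    future = client.submit(sum, [1, 2, 3])
    future.result()

    client.replicate([future], n=2)       # copy the key onto both workers
    try:
        client.replicate([future], n=0)   # asking to drop copies is refused
    except ValueError as exc:
        print(exc)                        # Can not use replicate to delete data
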
-distributed/tests/test_client.py::test_task_load_adapts_quickly PASSED
-distributed/tests/test_client.py::test_even_load_after_fast_functions PASSED
-distributed/tests/test_client.py::test_even_load_on_startup PASSED
-distributed/tests/test_client.py::test_contiguous_load SKIPPED (unco...)
-distributed/tests/test_client.py::test_balanced_with_submit PASSED
-distributed/tests/test_client.py::test_balanced_with_submit_and_resident_data PASSED
-distributed/tests/test_client.py::test_scheduler_saturates_cores PASSED
-distributed/tests/test_client.py::test_scheduler_saturates_cores_random PASSED
-distributed/tests/test_client.py::test_cancel_clears_processing PASSED
-distributed/tests/test_client.py::test_default_get 2022-08-26 14:03:02,179 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 39933 instead
-  warnings.warn(
-2022-08-26 14:03:02,182 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:02,184 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:02,184 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43257
-2022-08-26 14:03:02,184 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39933
-2022-08-26 14:03:02,193 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41695
-2022-08-26 14:03:02,193 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41695
-2022-08-26 14:03:02,193 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44109
-2022-08-26 14:03:02,193 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43257
-2022-08-26 14:03:02,193 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:02,193 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:02,193 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:02,193 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d6vuwx08
-2022-08-26 14:03:02,193 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:02,193 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43161
-2022-08-26 14:03:02,194 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43161
-2022-08-26 14:03:02,194 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43519
-2022-08-26 14:03:02,194 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43257
-2022-08-26 14:03:02,194 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:02,194 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:02,194 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:02,194 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lwjbz3a8
-2022-08-26 14:03:02,194 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:02,401 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41695', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:02,608 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41695
-2022-08-26 14:03:02,608 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:02,608 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43257
-2022-08-26 14:03:02,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:02,609 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43161', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:02,609 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:02,610 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43161
-2022-08-26 14:03:02,610 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:02,610 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43257
-2022-08-26 14:03:02,610 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:02,611 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:02,615 - distributed.scheduler - INFO - Receive client connection: Client-791ad85c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,616 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:02,627 - distributed.scheduler - INFO - Remove client Client-791ad85c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,627 - distributed.scheduler - INFO - Remove client Client-791ad85c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,627 - distributed.scheduler - INFO - Close client connection: Client-791ad85c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,630 - distributed.scheduler - INFO - Receive client connection: Client-791d1d10-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,630 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:02,676 - distributed.scheduler - INFO - Remove client Client-791d1d10-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,676 - distributed.scheduler - INFO - Remove client Client-791d1d10-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,676 - distributed.scheduler - INFO - Close client connection: Client-791d1d10-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,680 - distributed.scheduler - INFO - Receive client connection: Client-79248f96-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,680 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:02,692 - distributed.scheduler - INFO - Remove client Client-79248f96-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,692 - distributed.scheduler - INFO - Remove client Client-79248f96-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,692 - distributed.scheduler - INFO - Close client connection: Client-79248f96-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,696 - distributed.scheduler - INFO - Receive client connection: Client-792702e6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,696 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:02,708 - distributed.scheduler - INFO - Remove client Client-792702e6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,708 - distributed.scheduler - INFO - Remove client Client-792702e6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,708 - distributed.scheduler - INFO - Close client connection: Client-792702e6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,712 - distributed.scheduler - INFO - Receive client connection: Client-7929725b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,712 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:02,724 - distributed.scheduler - INFO - Remove client Client-7929725b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,724 - distributed.scheduler - INFO - Remove client Client-7929725b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,724 - distributed.scheduler - INFO - Close client connection: Client-7929725b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,727 - distributed.scheduler - INFO - Receive client connection: Client-792bdaa6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,728 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:02,732 - distributed.scheduler - INFO - Receive client connection: Client-792c8485-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,732 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:02,744 - distributed.scheduler - INFO - Remove client Client-792c8485-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,744 - distributed.scheduler - INFO - Remove client Client-792c8485-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,744 - distributed.scheduler - INFO - Close client connection: Client-792c8485-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,745 - distributed.scheduler - INFO - Remove client Client-792bdaa6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,745 - distributed.scheduler - INFO - Remove client Client-792bdaa6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:02,745 - distributed.scheduler - INFO - Close client connection: Client-792bdaa6-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_client.py::test_ensure_default_client PASSED
-distributed/tests/test_client.py::test_set_as_default PASSED
-distributed/tests/test_client.py::test_get_foo PASSED
-distributed/tests/test_client.py::test_get_foo_lost_keys PASSED
-distributed/tests/test_client.py::test_bad_tasks_fail SKIPPED (need ...)
-distributed/tests/test_client.py::test_get_processing_sync 2022-08-26 14:03:04,590 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 41587 instead
-  warnings.warn(
-2022-08-26 14:03:04,593 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:04,595 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:04,596 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34839
-2022-08-26 14:03:04,596 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41587
-2022-08-26 14:03:04,605 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43109
-2022-08-26 14:03:04,605 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43109
-2022-08-26 14:03:04,605 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35385
-2022-08-26 14:03:04,605 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34839
-2022-08-26 14:03:04,605 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:04,605 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:04,605 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:04,605 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7ij08ld0
-2022-08-26 14:03:04,605 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:04,609 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38253
-2022-08-26 14:03:04,609 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38253
-2022-08-26 14:03:04,609 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42717
-2022-08-26 14:03:04,609 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34839
-2022-08-26 14:03:04,609 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:04,609 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:04,609 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:04,609 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ev0kdvpw
-2022-08-26 14:03:04,609 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:04,815 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43109', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:05,037 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43109
-2022-08-26 14:03:05,038 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:05,038 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34839
-2022-08-26 14:03:05,038 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:05,038 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38253', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:05,039 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:05,039 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38253
-2022-08-26 14:03:05,039 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:05,039 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34839
-2022-08-26 14:03:05,039 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:05,040 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:05,046 - distributed.scheduler - INFO - Receive client connection: Client-7a8da3c3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:05,046 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:05,253 - distributed.scheduler - INFO - Client Client-7a8da3c3-2582-11ed-a99d-00d861bc4509 requests to cancel 10 keys
-2022-08-26 14:03:05,253 - distributed.scheduler - INFO - Scheduler cancels key slowinc-2124fd26f425a70c5cc03022ddd82cb5.  Force=False
-2022-08-26 14:03:05,253 - distributed.scheduler - INFO - Scheduler cancels key slowinc-619e7d83b71ce96b7beaca9b3333f0d9.  Force=False
-2022-08-26 14:03:05,253 - distributed.scheduler - INFO - Scheduler cancels key slowinc-2a0c1b8bc59a90ae73f7ce799cbe6be1.  Force=False
-2022-08-26 14:03:05,253 - distributed.scheduler - INFO - Scheduler cancels key slowinc-1abe6beef5a972738b95fd429f282599.  Force=False
-2022-08-26 14:03:05,253 - distributed.scheduler - INFO - Scheduler cancels key slowinc-a22b45eb58e0fc93838f76915c4c23cc.  Force=False
-2022-08-26 14:03:05,253 - distributed.scheduler - INFO - Scheduler cancels key slowinc-8c7a2cf658ac290e6e12ef453d939b47.  Force=False
-2022-08-26 14:03:05,253 - distributed.scheduler - INFO - Scheduler cancels key slowinc-fb38aefd9a197d359dca3ace96f7da2f.  Force=False
-2022-08-26 14:03:05,254 - distributed.scheduler - INFO - Scheduler cancels key slowinc-9393a294e973096da250ae88410b0e5f.  Force=False
-2022-08-26 14:03:05,254 - distributed.scheduler - INFO - Scheduler cancels key slowinc-61c7df7905f6fd7823511cbb6c488709.  Force=False
-2022-08-26 14:03:05,254 - distributed.scheduler - INFO - Scheduler cancels key slowinc-5a1bfed57e51bfa30d90dc387c0fdeed.  Force=False
-PASSED2022-08-26 14:03:05,266 - distributed.scheduler - INFO - Remove client Client-7a8da3c3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:05,267 - distributed.scheduler - INFO - Remove client Client-7a8da3c3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:05,267 - distributed.scheduler - INFO - Close client connection: Client-7a8da3c3-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_close_idempotent 2022-08-26 14:03:06,091 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 40153 instead
-  warnings.warn(
-2022-08-26 14:03:06,094 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:06,097 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:06,097 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34217
-2022-08-26 14:03:06,097 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40153
-2022-08-26 14:03:06,106 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35481
-2022-08-26 14:03:06,106 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35481
-2022-08-26 14:03:06,106 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43381
-2022-08-26 14:03:06,106 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34217
-2022-08-26 14:03:06,106 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:06,106 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:06,106 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:06,106 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gelkuo8h
-2022-08-26 14:03:06,106 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:06,115 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33795
-2022-08-26 14:03:06,115 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33795
-2022-08-26 14:03:06,115 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34897
-2022-08-26 14:03:06,115 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34217
-2022-08-26 14:03:06,115 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:06,115 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:06,115 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:06,115 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hr0uyi3r
-2022-08-26 14:03:06,115 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:06,324 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35481', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:06,542 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35481
-2022-08-26 14:03:06,542 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:06,542 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34217
-2022-08-26 14:03:06,543 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:06,543 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33795', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:06,543 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:06,543 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33795
-2022-08-26 14:03:06,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:06,544 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34217
-2022-08-26 14:03:06,544 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:06,545 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:06,550 - distributed.scheduler - INFO - Receive client connection: Client-7b733401-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:06,551 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:06,563 - distributed.scheduler - INFO - Remove client Client-7b733401-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:06,563 - distributed.scheduler - INFO - Remove client Client-7b733401-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:06,563 - distributed.scheduler - INFO - Close client connection: Client-7b733401-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_client.py::test_get_returns_early 2022-08-26 14:03:07,410 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 46281 instead
-  warnings.warn(
-2022-08-26 14:03:07,413 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:07,415 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:07,415 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45667
-2022-08-26 14:03:07,416 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46281
-2022-08-26 14:03:07,424 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33051
-2022-08-26 14:03:07,424 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33051
-2022-08-26 14:03:07,424 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45557
-2022-08-26 14:03:07,424 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45667
-2022-08-26 14:03:07,424 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:07,424 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:07,424 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:07,424 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y9k1x50m
-2022-08-26 14:03:07,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:07,425 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44037
-2022-08-26 14:03:07,425 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44037
-2022-08-26 14:03:07,425 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41265
-2022-08-26 14:03:07,425 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45667
-2022-08-26 14:03:07,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:07,425 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:07,425 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:07,425 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a3p2c1b0
-2022-08-26 14:03:07,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:07,633 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33051', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:07,842 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33051
-2022-08-26 14:03:07,842 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:07,842 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45667
-2022-08-26 14:03:07,843 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:07,843 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44037', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:07,843 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44037
-2022-08-26 14:03:07,843 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:07,843 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:07,844 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45667
-2022-08-26 14:03:07,844 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:07,845 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:07,849 - distributed.scheduler - INFO - Receive client connection: Client-7c398045-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:07,850 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:07,912 - distributed.scheduler - INFO - Receive client connection: Client-worker-7c42a582-2582-11ed-b7b2-00d861bc4509
-2022-08-26 14:03:07,912 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:07,985 - distributed.worker - WARNING - Compute Failed
-Key:       x
-Function:  throws
-args:      (1)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-2022-08-26 14:03:08,139 - distributed.worker - WARNING - Compute Failed
-Key:       x
-Function:  throws
-args:      (1)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED2022-08-26 14:03:08,152 - distributed.scheduler - INFO - Remove client Client-7c398045-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:08,152 - distributed.scheduler - INFO - Remove client Client-7c398045-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_Client_clears_references_after_restart SKIPPED
-distributed/tests/test_client.py::test_get_stops_work_after_error 2022-08-26 14:03:09,006 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:09,008 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:09,011 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:09,011 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40091
-2022-08-26 14:03:09,011 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:09,014 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-mbxrq9ra', purging
-2022-08-26 14:03:09,014 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-e2qghli7', purging
-2022-08-26 14:03:09,020 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40187
-2022-08-26 14:03:09,020 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40187
-2022-08-26 14:03:09,020 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37197
-2022-08-26 14:03:09,020 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40091
-2022-08-26 14:03:09,020 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:09,020 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:09,021 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:09,021 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mgxtqefi
-2022-08-26 14:03:09,021 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:09,021 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41245
-2022-08-26 14:03:09,021 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41245
-2022-08-26 14:03:09,021 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33819
-2022-08-26 14:03:09,021 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40091
-2022-08-26 14:03:09,021 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:09,021 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:09,021 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:09,021 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-r5cek5ug
-2022-08-26 14:03:09,021 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:09,248 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41245', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:09,459 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41245
-2022-08-26 14:03:09,459 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:09,459 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40091
-2022-08-26 14:03:09,460 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:09,460 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40187', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:09,460 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:09,460 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40187
-2022-08-26 14:03:09,460 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:09,461 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40091
-2022-08-26 14:03:09,461 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:09,462 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:09,466 - distributed.scheduler - INFO - Receive client connection: Client-7d3036a0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:09,467 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:09,600 - distributed.worker - WARNING - Compute Failed
-Key:       x
-Function:  throws
-args:      (1)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED2022-08-26 14:03:09,614 - distributed.scheduler - INFO - Remove client Client-7d3036a0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:09,614 - distributed.scheduler - INFO - Remove client Client-7d3036a0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:09,614 - distributed.scheduler - INFO - Close client connection: Client-7d3036a0-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_as_completed_list 2022-08-26 14:03:10,426 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:10,428 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:10,431 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:10,431 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33005
-2022-08-26 14:03:10,431 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:10,434 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-dpc0_tg9', purging
-2022-08-26 14:03:10,434 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-3tbtnca0', purging
-2022-08-26 14:03:10,434 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-r5cek5ug', purging
-2022-08-26 14:03:10,434 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-mgxtqefi', purging
-2022-08-26 14:03:10,441 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43395
-2022-08-26 14:03:10,441 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43395
-2022-08-26 14:03:10,441 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40433
-2022-08-26 14:03:10,441 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33005
-2022-08-26 14:03:10,441 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:10,441 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:10,441 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:10,441 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5b6u8j3a
-2022-08-26 14:03:10,441 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:10,481 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34607
-2022-08-26 14:03:10,481 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34607
-2022-08-26 14:03:10,482 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37327
-2022-08-26 14:03:10,482 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33005
-2022-08-26 14:03:10,482 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:10,482 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:10,482 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:10,482 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-owdrdslp
-2022-08-26 14:03:10,482 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:10,657 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43395', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:10,910 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43395
-2022-08-26 14:03:10,910 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:10,910 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33005
-2022-08-26 14:03:10,910 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:10,911 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34607', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:10,911 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:10,911 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34607
-2022-08-26 14:03:10,911 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:10,911 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33005
-2022-08-26 14:03:10,912 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:10,913 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:10,918 - distributed.scheduler - INFO - Receive client connection: Client-7e0da9fd-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:10,918 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:10,999 - distributed.scheduler - INFO - Remove client Client-7e0da9fd-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:10,999 - distributed.scheduler - INFO - Remove client Client-7e0da9fd-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_as_completed_results 2022-08-26 14:03:11,818 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:11,821 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:11,824 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:11,824 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43933
-2022-08-26 14:03:11,824 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:11,826 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-7unuv_cx', purging
-2022-08-26 14:03:11,826 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ri4awznv', purging
-2022-08-26 14:03:11,827 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-5b6u8j3a', purging
-2022-08-26 14:03:11,827 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-owdrdslp', purging
-2022-08-26 14:03:11,833 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33229
-2022-08-26 14:03:11,833 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33229
-2022-08-26 14:03:11,833 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36313
-2022-08-26 14:03:11,833 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43933
-2022-08-26 14:03:11,833 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:11,833 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:11,833 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:11,833 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9ly64mfu
-2022-08-26 14:03:11,833 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:11,834 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35423
-2022-08-26 14:03:11,834 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35423
-2022-08-26 14:03:11,834 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45223
-2022-08-26 14:03:11,834 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43933
-2022-08-26 14:03:11,834 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:11,834 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:11,834 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:11,834 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6d09yz3h
-2022-08-26 14:03:11,834 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:12,061 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35423', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:12,269 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35423
-2022-08-26 14:03:12,269 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:12,269 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43933
-2022-08-26 14:03:12,269 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:12,270 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33229', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:12,270 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:12,270 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33229
-2022-08-26 14:03:12,270 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:12,270 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43933
-2022-08-26 14:03:12,271 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:12,271 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:12,276 - distributed.scheduler - INFO - Receive client connection: Client-7edcea17-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:12,276 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:12,353 - distributed.scheduler - INFO - Remove client Client-7edcea17-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:12,354 - distributed.scheduler - INFO - Remove client Client-7edcea17-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_as_completed_batches[True] 2022-08-26 14:03:13,171 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:13,173 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:13,176 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:13,176 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38089
-2022-08-26 14:03:13,176 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:13,179 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-wfwhyb7u', purging
-2022-08-26 14:03:13,179 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-rye9ie5z', purging
-2022-08-26 14:03:13,180 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-9ly64mfu', purging
-2022-08-26 14:03:13,180 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-6d09yz3h', purging
-2022-08-26 14:03:13,186 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34513
-2022-08-26 14:03:13,186 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34513
-2022-08-26 14:03:13,186 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34515
-2022-08-26 14:03:13,186 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38089
-2022-08-26 14:03:13,186 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:13,186 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:13,186 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:13,187 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2thwgw_3
-2022-08-26 14:03:13,187 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:13,187 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43527
-2022-08-26 14:03:13,187 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43527
-2022-08-26 14:03:13,187 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40537
-2022-08-26 14:03:13,187 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38089
-2022-08-26 14:03:13,187 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:13,187 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:13,187 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:13,187 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-g93u_mhw
-2022-08-26 14:03:13,187 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:13,397 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43527', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:13,604 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43527
-2022-08-26 14:03:13,605 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:13,605 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38089
-2022-08-26 14:03:13,605 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:13,605 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34513', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:13,606 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:13,606 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34513
-2022-08-26 14:03:13,606 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:13,606 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38089
-2022-08-26 14:03:13,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:13,607 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:13,613 - distributed.scheduler - INFO - Receive client connection: Client-7fa8db7c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:13,614 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:14,112 - distributed.scheduler - INFO - Remove client Client-7fa8db7c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:14,113 - distributed.scheduler - INFO - Remove client Client-7fa8db7c-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_as_completed_batches[False] 2022-08-26 14:03:14,938 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 41429 instead
-  warnings.warn(
-2022-08-26 14:03:14,940 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:14,943 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:14,943 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44417
-2022-08-26 14:03:14,943 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41429
-2022-08-26 14:03:14,952 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45511
-2022-08-26 14:03:14,952 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45511
-2022-08-26 14:03:14,952 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41113
-2022-08-26 14:03:14,952 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44417
-2022-08-26 14:03:14,952 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:14,953 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:14,953 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:14,953 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-npfmsj00
-2022-08-26 14:03:14,953 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:14,957 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41335
-2022-08-26 14:03:14,957 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41335
-2022-08-26 14:03:14,957 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32855
-2022-08-26 14:03:14,957 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44417
-2022-08-26 14:03:14,957 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:14,957 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:14,957 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:14,957 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7compxdu
-2022-08-26 14:03:14,958 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:15,164 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41335', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:15,384 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41335
-2022-08-26 14:03:15,384 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:15,384 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44417
-2022-08-26 14:03:15,384 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:15,385 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45511', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:15,385 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:15,385 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45511
-2022-08-26 14:03:15,385 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:15,386 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44417
-2022-08-26 14:03:15,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:15,387 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:15,392 - distributed.scheduler - INFO - Receive client connection: Client-80b85902-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:15,392 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:15,889 - distributed.scheduler - INFO - Remove client Client-80b85902-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:15,889 - distributed.scheduler - INFO - Remove client Client-80b85902-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_as_completed_next_batch 2022-08-26 14:03:16,719 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:16,722 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:16,724 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:16,725 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34661
-2022-08-26 14:03:16,725 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:16,727 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-bu1ohauw', purging
-2022-08-26 14:03:16,727 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-dseltagw', purging
-2022-08-26 14:03:16,733 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39749
-2022-08-26 14:03:16,733 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39749
-2022-08-26 14:03:16,733 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37545
-2022-08-26 14:03:16,733 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34661
-2022-08-26 14:03:16,733 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:16,733 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:16,733 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:16,733 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-b3452h_i
-2022-08-26 14:03:16,733 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:16,734 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44967
-2022-08-26 14:03:16,734 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44967
-2022-08-26 14:03:16,734 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43195
-2022-08-26 14:03:16,734 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34661
-2022-08-26 14:03:16,734 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:16,734 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:16,734 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:16,734 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ud5ev6xf
-2022-08-26 14:03:16,734 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:16,969 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39749', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:17,177 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39749
-2022-08-26 14:03:17,177 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:17,177 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34661
-2022-08-26 14:03:17,177 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:17,178 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44967', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:17,178 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44967
-2022-08-26 14:03:17,178 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:17,178 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:17,178 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34661
-2022-08-26 14:03:17,179 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:17,180 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:17,185 - distributed.scheduler - INFO - Receive client connection: Client-81c9f0b6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:17,185 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:17,357 - distributed.scheduler - INFO - Remove client Client-81c9f0b6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:17,358 - distributed.scheduler - INFO - Remove client Client-81c9f0b6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:17,358 - distributed.scheduler - INFO - Close client connection: Client-81c9f0b6-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_status PASSED
-distributed/tests/test_client.py::test_persist_optimize_graph PASSED
-distributed/tests/test_client.py::test_scatter_raises_if_no_workers 2022-08-26 14:03:18,434 - distributed.core - ERROR - Exception while handling op scatter
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5075, in scatter
-    raise TimeoutError("No valid workers found")
-asyncio.exceptions.TimeoutError: No valid workers found
-PASSED
-distributed/tests/test_client.py::test_reconnect 2022-08-26 14:03:18,661 - distributed.scheduler - CRITICAL - Closed comm <BatchedSend: closed> while trying to write [{'op': 'lost-data', 'key': 'inc-03d935909bba38f9a49655e867cbf56a'}]
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4888, in handle_worker
-    await self.handle_stream(comm=comm, extra={"worker": worker})
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 822, in handle_stream
-    msgs = await comm.read()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5020, in send_all
-    c.send(*msgs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 156, in send
-    raise CommClosedError(f"Comm {self.comm!r} already closed.")
-distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:50011 remote=tcp://127.0.0.1:45088> already closed.
-2022-08-26 14:03:18,662 - distributed.scheduler - ERROR - Cannot schedule a new coroutine function as the group is already closed.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4888, in handle_worker
-    await self.handle_stream(comm=comm, extra={"worker": worker})
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 822, in handle_stream
-    msgs = await comm.read()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4358, in remove_worker
-    self._ongoing_background_tasks.call_later(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 231, in call_later
-    self.call_soon(_delayed(afunc, delay), *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 190, in call_soon
-    raise AsyncTaskGroupClosedError(
-distributed.core.AsyncTaskGroupClosedError: Cannot schedule a new coroutine function as the group is already closed.
-2022-08-26 14:03:18,662 - distributed.core - ERROR - Cannot schedule a new coroutine function as the group is already closed.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4888, in handle_worker
-    await self.handle_stream(comm=comm, extra={"worker": worker})
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 822, in handle_stream
-    msgs = await comm.read()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 3763, in add_worker
-    await self.handle_worker(comm, address)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4892, in handle_worker
-    await self.remove_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4358, in remove_worker
-    self._ongoing_background_tasks.call_later(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 231, in call_later
-    self.call_soon(_delayed(afunc, delay), *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 190, in call_soon
-    raise AsyncTaskGroupClosedError(
-distributed.core.AsyncTaskGroupClosedError: Cannot schedule a new coroutine function as the group is already closed.
-2022-08-26 14:03:18,662 - distributed.core - ERROR - Exception while handling op register-worker
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4888, in handle_worker
-    await self.handle_stream(comm=comm, extra={"worker": worker})
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 822, in handle_stream
-    msgs = await comm.read()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 3763, in add_worker
-    await self.handle_worker(comm, address)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4892, in handle_worker
-    await self.remove_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4358, in remove_worker
-    self._ongoing_background_tasks.call_later(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 231, in call_later
-    self.call_soon(_delayed(afunc, delay), *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 190, in call_soon
-    raise AsyncTaskGroupClosedError(
-distributed.core.AsyncTaskGroupClosedError: Cannot schedule a new coroutine function as the group is already closed.
-2022-08-26 14:03:18,781 - distributed.scheduler - CRITICAL - Closed comm <BatchedSend: closed> while trying to write [{'op': 'lost-data', 'key': 'inc-03d935909bba38f9a49655e867cbf56a'}]
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4888, in handle_worker
-    await self.handle_stream(comm=comm, extra={"worker": worker})
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 822, in handle_stream
-    msgs = await comm.read()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5020, in send_all
-    c.send(*msgs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 156, in send
-    raise CommClosedError(f"Comm {self.comm!r} already closed.")
-distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:50011 remote=tcp://127.0.0.1:45100> already closed.
-2022-08-26 14:03:18,781 - distributed.scheduler - ERROR - Cannot schedule a new coroutine function as the group is already closed.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4888, in handle_worker
-    await self.handle_stream(comm=comm, extra={"worker": worker})
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 822, in handle_stream
-    msgs = await comm.read()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4358, in remove_worker
-    self._ongoing_background_tasks.call_later(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 231, in call_later
-    self.call_soon(_delayed(afunc, delay), *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 190, in call_soon
-    raise AsyncTaskGroupClosedError(
-distributed.core.AsyncTaskGroupClosedError: Cannot schedule a new coroutine function as the group is already closed.
-2022-08-26 14:03:18,781 - distributed.core - ERROR - Cannot schedule a new coroutine function as the group is already closed.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4888, in handle_worker
-    await self.handle_stream(comm=comm, extra={"worker": worker})
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 822, in handle_stream
-    msgs = await comm.read()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 3763, in add_worker
-    await self.handle_worker(comm, address)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4892, in handle_worker
-    await self.remove_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4358, in remove_worker
-    self._ongoing_background_tasks.call_later(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 231, in call_later
-    self.call_soon(_delayed(afunc, delay), *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 190, in call_soon
-    raise AsyncTaskGroupClosedError(
-distributed.core.AsyncTaskGroupClosedError: Cannot schedule a new coroutine function as the group is already closed.
-2022-08-26 14:03:18,782 - distributed.core - ERROR - Exception while handling op register-worker
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4888, in handle_worker
-    await self.handle_stream(comm=comm, extra={"worker": worker})
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 822, in handle_stream
-    msgs = await comm.read()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 3763, in add_worker
-    await self.handle_worker(comm, address)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4892, in handle_worker
-    await self.remove_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 4358, in remove_worker
-    self._ongoing_background_tasks.call_later(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 231, in call_later
-    self.call_soon(_delayed(afunc, delay), *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 190, in call_soon
-    raise AsyncTaskGroupClosedError(
-distributed.core.AsyncTaskGroupClosedError: Cannot schedule a new coroutine function as the group is already closed.
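The repeated AsyncTaskGroupClosedError tracebacks above all reduce to one pattern: during shutdown, remove_worker tries to schedule a follow-up coroutine on a background task group that has already been closed. The following is only an illustrative sketch of that pattern, not distributed's implementation; TinyTaskGroup and TaskGroupClosedError are made-up names for the example.

    import asyncio

    class TaskGroupClosedError(RuntimeError):
        pass

    class TinyTaskGroup:
        def __init__(self):
            self._closed = False
            self._tasks = set()

        def call_soon(self, afunc, *args, **kwargs):
            # Mirrors the failure mode in the log: scheduling after close() raises.
            if self._closed:
                raise TaskGroupClosedError(
                    "Cannot schedule a new coroutine function as the group is already closed."
                )
            task = asyncio.ensure_future(afunc(*args, **kwargs))
            self._tasks.add(task)
            task.add_done_callback(self._tasks.discard)
            return task

        async def close(self):
            self._closed = True
            await asyncio.gather(*self._tasks, return_exceptions=True)

    async def demo():
        group = TinyTaskGroup()
        group.call_soon(asyncio.sleep, 0)
        await group.close()
        try:
            group.call_soon(asyncio.sleep, 0)   # rejected, like remove_worker() above
        except TaskGroupClosedError as exc:
            print("rejected:", exc)

    asyncio.run(demo())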
-2022-08-26 14:03:18,885 - distributed.client - ERROR - 
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x56403f4e0220>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
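The CancelledError above interrupts a reconnect loop that sleeps a backoff between refused connection attempts. A rough illustration of that shape in generic asyncio (not distributed's connect(); the function name, defaults, and retry policy here are made up):

    import asyncio

    async def connect_with_backoff(host, port, timeout=5.0, backoff=0.1, max_backoff=1.0):
        loop = asyncio.get_running_loop()
        deadline = loop.time() + timeout
        while True:
            try:
                # open_connection returns a (reader, writer) pair on success
                return await asyncio.wait_for(
                    asyncio.open_connection(host, port),
                    timeout=max(0.0, deadline - loop.time()),
                )
            except (ConnectionRefusedError, asyncio.TimeoutError):
                if loop.time() >= deadline:
                    raise
                await asyncio.sleep(backoff)          # same step the log shows being cancelled
                backoff = min(backoff * 2, max_backoff)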
-distributed/tests/test_client.py::test_reconnect_timeout PASSED
-distributed/tests/test_client.py::test_open_close_many_workers[Worker-100-5] SKIPPED
-distributed/tests/test_client.py::test_open_close_many_workers[Nanny-10-20] SKIPPED
-distributed/tests/test_client.py::test_idempotence 2022-08-26 14:03:19,848 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED
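For orientation, the "Compute Failed ... ZeroDivisionError" warning above is the worker-side view of a failed task. On the client side the same situation looks roughly like this sketch using the public Client API (the scheduler address is a placeholder, not taken from the test run):

    from distributed import Client

    def div(x, y):
        return x / y

    client = Client("tcp://127.0.0.1:8786")   # placeholder scheduler address
    fut = client.submit(div, 1, 0)
    print(fut.exception())                    # ZeroDivisionError('division by zero')
    # fut.result() would re-raise the ZeroDivisionError locally.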
-distributed/tests/test_client.py::test_scheduler_info 2022-08-26 14:03:20,982 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 46507 instead
-  warnings.warn(
-2022-08-26 14:03:20,984 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:20,987 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:20,988 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38911
-2022-08-26 14:03:20,988 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46507
-2022-08-26 14:03:20,996 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37921
-2022-08-26 14:03:20,996 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37921
-2022-08-26 14:03:20,996 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38079
-2022-08-26 14:03:20,996 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38911
-2022-08-26 14:03:20,996 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:20,996 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:20,996 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:20,997 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kp8mql7y
-2022-08-26 14:03:20,997 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:21,019 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35733
-2022-08-26 14:03:21,019 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35733
-2022-08-26 14:03:21,019 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43859
-2022-08-26 14:03:21,019 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38911
-2022-08-26 14:03:21,019 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:21,019 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:21,019 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:21,019 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uc3_4stt
-2022-08-26 14:03:21,019 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:21,211 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37921', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:21,436 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37921
-2022-08-26 14:03:21,437 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:21,437 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38911
-2022-08-26 14:03:21,437 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:21,437 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35733', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:21,438 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35733
-2022-08-26 14:03:21,438 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:21,438 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:21,438 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38911
-2022-08-26 14:03:21,438 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:21,439 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:21,445 - distributed.scheduler - INFO - Receive client connection: Client-8453f232-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:21,446 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:21,459 - distributed.scheduler - INFO - Remove client Client-8453f232-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:21,459 - distributed.scheduler - INFO - Remove client Client-8453f232-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:21,460 - distributed.scheduler - INFO - Close client connection: Client-8453f232-2582-11ed-a99d-00d861bc4509
-
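Roughly the API surface test_scheduler_info exercises, sketched against a throwaway LocalCluster (the parameters are chosen for the example; dashboard_address=":0" lets the OS pick a free port, which avoids the "Port 8787 is already in use" warning seen in these logs):

    from distributed import Client, LocalCluster

    cluster = LocalCluster(n_workers=2, threads_per_worker=1, dashboard_address=":0")
    client = Client(cluster)
    info = client.scheduler_info()
    print(info["address"], len(info["workers"]))   # scheduler address and worker count
    client.close()
    cluster.close()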
-distributed/tests/test_client.py::test_write_scheduler_file 2022-08-26 14:03:22,282 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:22,284 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:22,287 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:22,287 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44359
-2022-08-26 14:03:22,287 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:22,297 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42937
-2022-08-26 14:03:22,297 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42937
-2022-08-26 14:03:22,297 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44649
-2022-08-26 14:03:22,297 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44359
-2022-08-26 14:03:22,297 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:22,297 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:22,297 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:22,297 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uxw1mvsp
-2022-08-26 14:03:22,297 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:22,297 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40925
-2022-08-26 14:03:22,297 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40925
-2022-08-26 14:03:22,297 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40553
-2022-08-26 14:03:22,297 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44359
-2022-08-26 14:03:22,297 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:22,297 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:22,297 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:22,297 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-34xw98ia
-2022-08-26 14:03:22,297 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:22,503 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42937', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:22,712 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42937
-2022-08-26 14:03:22,712 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:22,712 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44359
-2022-08-26 14:03:22,713 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:22,713 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40925', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:22,713 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:22,714 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40925
-2022-08-26 14:03:22,714 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:22,714 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44359
-2022-08-26 14:03:22,714 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:22,715 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:22,721 - distributed.scheduler - INFO - Receive client connection: Client-85168c80-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:22,721 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:22,727 - distributed.scheduler - INFO - Receive client connection: Client-851790a2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:22,727 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:22,784 - distributed.scheduler - INFO - Remove client Client-851790a2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:22,784 - distributed.scheduler - INFO - Remove client Client-851790a2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:22,784 - distributed.scheduler - INFO - Close client connection: Client-851790a2-2582-11ed-a99d-00d861bc4509
-PASSED2022-08-26 14:03:22,785 - distributed.scheduler - INFO - Remove client Client-85168c80-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:22,786 - distributed.scheduler - INFO - Remove client Client-85168c80-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:22,786 - distributed.scheduler - INFO - Close client connection: Client-85168c80-2582-11ed-a99d-00d861bc4509
-
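test_write_scheduler_file presumably covers the scheduler-file handshake: one client writes the scheduler's connection details to a JSON file and a second client connects using only that file. A sketch of that workflow (the path is a placeholder, not taken from the test):

    from distributed import Client, LocalCluster

    cluster = LocalCluster(n_workers=1, dashboard_address=":0")
    c1 = Client(cluster)
    c1.write_scheduler_file("/tmp/scheduler.json")      # placeholder path
    c2 = Client(scheduler_file="/tmp/scheduler.json")
    assert c1.scheduler_info()["address"] == c2.scheduler_info()["address"]
    c2.close()
    c1.close()
    cluster.close()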
-distributed/tests/test_client.py::test_get_versions_sync 2022-08-26 14:03:23,629 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 46323 instead
-  warnings.warn(
-2022-08-26 14:03:23,632 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:23,634 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:23,634 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46543
-2022-08-26 14:03:23,635 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46323
-2022-08-26 14:03:23,643 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34145
-2022-08-26 14:03:23,643 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44179
-2022-08-26 14:03:23,644 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34145
-2022-08-26 14:03:23,644 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44179
-2022-08-26 14:03:23,644 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35615
-2022-08-26 14:03:23,644 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46543
-2022-08-26 14:03:23,644 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36889
-2022-08-26 14:03:23,644 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:23,644 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46543
-2022-08-26 14:03:23,644 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:23,644 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:23,644 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:23,644 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:23,644 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uebvkjld
-2022-08-26 14:03:23,644 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:23,644 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5q18b4rr
-2022-08-26 14:03:23,644 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:23,644 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:23,856 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34145', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:24,063 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34145
-2022-08-26 14:03:24,064 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:24,064 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46543
-2022-08-26 14:03:24,064 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:24,064 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44179', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:24,065 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44179
-2022-08-26 14:03:24,065 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:24,065 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:24,065 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46543
-2022-08-26 14:03:24,065 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:24,066 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:24,072 - distributed.scheduler - INFO - Receive client connection: Client-85e4b3fd-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:24,072 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:24,223 - distributed.scheduler - INFO - Remove client Client-85e4b3fd-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:24,223 - distributed.scheduler - INFO - Remove client Client-85e4b3fd-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:24,223 - distributed.scheduler - INFO - Close client connection: Client-85e4b3fd-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_get_versions_async PASSED
-distributed/tests/test_client.py::test_get_versions_rpc_error 2022-08-26 14:03:24,704 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:33991 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:33991 after 0.2 s
-PASSED
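The three get_versions tests above call Client.get_versions, which collects Python and package versions from the scheduler, the workers, and the client (check=True raises on mismatches); the rpc_error case shows the broadcast timing out against an unreachable worker. Assuming an already-connected Client named client:

    versions = client.get_versions(check=False)
    print(sorted(versions))        # expected keys: 'client', 'scheduler', 'workers'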
-distributed/tests/test_client.py::test_threaded_get_within_distributed 2022-08-26 14:03:25,693 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:25,695 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:25,698 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:25,698 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41049
-2022-08-26 14:03:25,698 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:25,707 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44469
-2022-08-26 14:03:25,707 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44469
-2022-08-26 14:03:25,707 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32833
-2022-08-26 14:03:25,707 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41049
-2022-08-26 14:03:25,707 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:25,707 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:25,707 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:25,707 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vm3nmp51
-2022-08-26 14:03:25,707 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:25,723 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38237
-2022-08-26 14:03:25,724 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38237
-2022-08-26 14:03:25,724 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38133
-2022-08-26 14:03:25,724 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41049
-2022-08-26 14:03:25,724 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:25,724 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:25,724 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:25,724 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-b3bolbm8
-2022-08-26 14:03:25,724 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:25,932 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38237', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:26,145 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38237
-2022-08-26 14:03:26,145 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:26,145 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41049
-2022-08-26 14:03:26,146 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:26,146 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44469', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:26,146 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44469
-2022-08-26 14:03:26,146 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:26,147 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:26,147 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41049
-2022-08-26 14:03:26,147 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:26,148 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:26,153 - distributed.scheduler - INFO - Receive client connection: Client-87224d94-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:26,153 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:26,476 - distributed.scheduler - INFO - Remove client Client-87224d94-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:26,476 - distributed.scheduler - INFO - Remove client Client-87224d94-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:26,476 - distributed.scheduler - INFO - Close client connection: Client-87224d94-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_lose_scattered_data PASSED
-distributed/tests/test_client.py::test_partially_lose_scattered_data PASSED
-distributed/tests/test_client.py::test_scatter_compute_lose PASSED
-distributed/tests/test_client.py::test_scatter_compute_store_lose PASSED
-distributed/tests/test_client.py::test_scatter_compute_store_lose_processing PASSED
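The lose/scatter tests in this stretch start from Client.scatter, which ships local objects to workers up front and hands back Futures; the *_lose variants then remove workers to check how the scheduler recovers or reports the data as lost. A minimal scatter sketch, assuming a connected Client named client:

    [data] = client.scatter([{"x": 1, "y": 2}])           # Future pointing at worker-held data
    total = client.submit(lambda d: d["x"] + d["y"], data)
    print(total.result())                                 # 3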
-distributed/tests/test_client.py::test_serialize_future PASSED
-distributed/tests/test_client.py::test_temp_default_client PASSED
-distributed/tests/test_client.py::test_as_current PASSED
-distributed/tests/test_client.py::test_as_current_is_thread_local 2022-08-26 14:03:30,121 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/node.py:183: UserWarning: Port 8787 is already in use.
-Perhaps you already have a cluster running?
-Hosting the HTTP server on port 33377 instead
-  warnings.warn(
-2022-08-26 14:03:30,123 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:30,126 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:30,126 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37623
-2022-08-26 14:03:30,126 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33377
-2022-08-26 14:03:30,135 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45609
-2022-08-26 14:03:30,135 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45609
-2022-08-26 14:03:30,135 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40701
-2022-08-26 14:03:30,135 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37623
-2022-08-26 14:03:30,135 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:30,136 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:30,136 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:30,136 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-g1siqj_c
-2022-08-26 14:03:30,136 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:30,364 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33839
-2022-08-26 14:03:30,364 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33839
-2022-08-26 14:03:30,364 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46091
-2022-08-26 14:03:30,364 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37623
-2022-08-26 14:03:30,364 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:30,364 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:30,364 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:30,364 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dcihdjfj
-2022-08-26 14:03:30,364 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:30,406 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45609', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:30,664 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45609
-2022-08-26 14:03:30,665 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:30,665 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37623
-2022-08-26 14:03:30,665 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:30,665 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33839', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:30,666 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33839
-2022-08-26 14:03:30,666 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:30,666 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:30,666 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37623
-2022-08-26 14:03:30,666 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:30,667 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:30,673 - distributed.scheduler - INFO - Receive client connection: Client-89d3ebba-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:30,673 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:30,674 - distributed.scheduler - INFO - Receive client connection: Client-89d3fa38-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:30,674 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:30,686 - distributed.scheduler - INFO - Remove client Client-89d3ebba-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:30,686 - distributed.scheduler - INFO - Remove client Client-89d3ebba-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:30,686 - distributed.scheduler - INFO - Remove client Client-89d3fa38-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:30,686 - distributed.scheduler - INFO - Remove client Client-89d3fa38-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:30,686 - distributed.scheduler - INFO - Close client connection: Client-89d3ebba-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:30,686 - distributed.scheduler - INFO - Close client connection: Client-89d3fa38-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_client.py::test_as_current_is_task_local PASSED
-distributed/tests/test_client.py::test_persist_workers_annotate PASSED
-distributed/tests/test_client.py::test_persist_workers_annotate2 PASSED
-distributed/tests/test_client.py::test_persist_workers PASSED
-distributed/tests/test_client.py::test_compute_workers_annotate PASSED
-distributed/tests/test_client.py::test_compute_workers PASSED
-distributed/tests/test_client.py::test_compute_nested_containers PASSED
-distributed/tests/test_client.py::test_scatter_type PASSED
-distributed/tests/test_client.py::test_retire_workers_2 PASSED
-distributed/tests/test_client.py::test_retire_many_workers PASSED
-distributed/tests/test_client.py::test_weight_occupancy_against_data_movement PASSED
-distributed/tests/test_client.py::test_distribute_tasks_by_nthreads PASSED
-distributed/tests/test_client.py::test_add_done_callback 2022-08-26 14:03:34,024 - distributed.worker - WARNING - Compute Failed
-Key:       v
-Function:  throws
-args:      ('hello')
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED
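The Compute Failed warning above belongs to test_add_done_callback: the callback fires once the future finishes, whether it errored (as here) or succeeded. A sketch against an already-connected Client named client (the failing task is an arbitrary stand-in, not the test's throws() helper):

    def on_done(fut):
        print(fut.key, fut.status)        # e.g. the key logged above and status "error"

    f = client.submit(lambda: 1 / 0)
    f.add_done_callback(on_done)
    f.exception()                         # block until the task has finished failing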
-distributed/tests/test_client.py::test_normalize_collection PASSED
-distributed/tests/test_client.py::test_normalize_collection_dask_array PASSED
-distributed/tests/test_client.py::test_normalize_collection_with_released_futures SKIPPED
-distributed/tests/test_client.py::test_auto_normalize_collection XPASS
-distributed/tests/test_client.py::test_auto_normalize_collection_sync 2022-08-26 14:03:36,160 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:36,163 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:36,166 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:36,166 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35727
-2022-08-26 14:03:36,166 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:36,175 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45211
-2022-08-26 14:03:36,175 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45211
-2022-08-26 14:03:36,175 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35219
-2022-08-26 14:03:36,175 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35727
-2022-08-26 14:03:36,175 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:36,175 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:36,175 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:36,175 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-56hk2cup
-2022-08-26 14:03:36,175 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:36,176 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36223
-2022-08-26 14:03:36,176 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36223
-2022-08-26 14:03:36,176 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41741
-2022-08-26 14:03:36,176 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35727
-2022-08-26 14:03:36,176 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:36,176 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:36,176 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:36,176 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yiin1ak_
-2022-08-26 14:03:36,176 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:36,436 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36223', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:36,687 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36223
-2022-08-26 14:03:36,687 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:36,687 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35727
-2022-08-26 14:03:36,688 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:36,688 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45211', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:36,688 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45211
-2022-08-26 14:03:36,688 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:36,688 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:36,689 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35727
-2022-08-26 14:03:36,689 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:36,690 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:36,694 - distributed.scheduler - INFO - Receive client connection: Client-8d6ae129-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:36,695 - distributed.core - INFO - Starting established connection
-XPASS2022-08-26 14:03:36,746 - distributed.scheduler - INFO - Remove client Client-8d6ae129-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:36,746 - distributed.scheduler - INFO - Remove client Client-8d6ae129-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_interleave_computations PASSED
-distributed/tests/test_client.py::test_interleave_computations_map SKIPPED
-distributed/tests/test_client.py::test_scatter_dict_workers PASSED
-distributed/tests/test_client.py::test_client_timeout SKIPPED (need ...)
-distributed/tests/test_client.py::test_submit_list_kwargs 2022-08-26 14:03:38,200 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_map_list_kwargs 2022-08-26 14:03:38,467 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_recreate_error_delayed 2022-08-26 14:03:38,519 - distributed.worker - WARNING - Compute Failed
-Key:       div-457d11cf-cbbc-4240-b45d-377f0f7867b5
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:03:38,728 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_recreate_error_futures 2022-08-26 14:03:38,780 - distributed.worker - WARNING - Compute Failed
-Key:       div-48cda9510d2a9fa613ab34def8d894c6
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:03:38,991 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_recreate_error_collection 2022-08-26 14:03:39,040 - distributed.worker - WARNING - Compute Failed
-Key:       ('range-lambda-b2643e81f0405ffd724a322a8742f590', 0)
-Function:  execute_task
-args:      ((<function reify at 0x56403c29c420>, (<function map_chunk at 0x56403c29d800>, <function test_recreate_error_collection.<locals>.<lambda> at 0x5640360539e0>, [(<class 'range'>, 0, 2)], None, {})))
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:03:39,077 - distributed.worker - WARNING - Compute Failed
-Key:       ('map-6564c9c40b6299d2d0e930ba336a465c', 0)
-Function:  map
-args:      (0    0
-1    1
-Name: a, dtype: int64, <function test_recreate_error_collection.<locals>.make_err at 0x5640360563d0>, None)
-kwargs:    {}
-Exception: 'ValueError()'
-
-2022-08-26 14:03:39,343 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_recreate_error_array 2022-08-26 14:03:39,423 - distributed.worker - WARNING - Compute Failed
-Key:       ('solve-triangular-sum-sum-aggregate-c506b22963bfbfc41df1462444c72b22',)
-Function:  execute_task
-args:      ((Compose(functools.partial(<function sum at 0x5640365aa020>, dtype=dtype('float64'), axis=(0, 1), keepdims=False), functools.partial(<function _concatenate2 at 0x564037768490>, axes=[0, 1])), [[(subgraph_callable-c84163be-eaf0-4b66-a3cb-ddebe58f938a, (<function solve_triangular_safe at 0x56403772f890>, array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]), array([[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
-       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
-       
-kwargs:    {}
-Exception: "LinAlgError('singular matrix: resolution failed at diagonal 0')"
-
-2022-08-26 14:03:39,671 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
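The recreate_error_* tests in this run presumably go through Client.recreate_error_locally, which re-runs the failing task in the local process so the underlying exception (the ZeroDivisionError / ValueError / LinAlgError logged above) can be stepped through in a debugger. A sketch assuming a connected Client named client:

    from operator import truediv

    fut = client.submit(truediv, 1, 0)
    fut.exception()                               # wait for the remote failure
    try:
        client.recreate_error_locally(fut)        # re-raises ZeroDivisionError here
    except ZeroDivisionError as exc:
        print("reproduced locally:", exc)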
-distributed/tests/test_client.py::test_recreate_error_sync 2022-08-26 14:03:40,476 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:40,478 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:40,481 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:40,481 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38323
-2022-08-26 14:03:40,481 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:40,489 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39241
-2022-08-26 14:03:40,489 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39241
-2022-08-26 14:03:40,489 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35611
-2022-08-26 14:03:40,489 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38323
-2022-08-26 14:03:40,489 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:40,489 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:40,489 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:40,489 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-futatohu
-2022-08-26 14:03:40,489 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:40,496 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46643
-2022-08-26 14:03:40,496 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46643
-2022-08-26 14:03:40,496 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41795
-2022-08-26 14:03:40,496 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38323
-2022-08-26 14:03:40,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:40,496 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:40,496 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:40,496 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hmd2yj9u
-2022-08-26 14:03:40,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:40,747 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46643', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:40,995 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46643
-2022-08-26 14:03:40,995 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:40,995 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38323
-2022-08-26 14:03:40,996 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:40,996 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39241', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:40,996 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:40,996 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39241
-2022-08-26 14:03:40,996 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:40,997 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38323
-2022-08-26 14:03:40,997 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:40,998 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:41,002 - distributed.scheduler - INFO - Receive client connection: Client-8ffc3cde-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:41,002 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:41,094 - distributed.worker - WARNING - Compute Failed
-Key:       div-48cda9510d2a9fa613ab34def8d894c6
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED2022-08-26 14:03:41,109 - distributed.scheduler - INFO - Remove client Client-8ffc3cde-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:41,109 - distributed.scheduler - INFO - Remove client Client-8ffc3cde-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_recreate_error_not_error 2022-08-26 14:03:41,918 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:41,920 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:41,923 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:41,923 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46059
-2022-08-26 14:03:41,923 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:41,927 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-futatohu', purging
-2022-08-26 14:03:41,928 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-hmd2yj9u', purging
-2022-08-26 14:03:41,933 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38165
-2022-08-26 14:03:41,933 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38165
-2022-08-26 14:03:41,933 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45621
-2022-08-26 14:03:41,933 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46059
-2022-08-26 14:03:41,933 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:41,933 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:41,933 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:41,933 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0bzl4qgh
-2022-08-26 14:03:41,933 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:41,933 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35993
-2022-08-26 14:03:41,934 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35993
-2022-08-26 14:03:41,934 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35033
-2022-08-26 14:03:41,934 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46059
-2022-08-26 14:03:41,934 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:41,934 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:41,934 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:41,934 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1g8xwon8
-2022-08-26 14:03:41,934 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:42,201 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35993', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:42,448 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35993
-2022-08-26 14:03:42,448 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:42,448 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46059
-2022-08-26 14:03:42,449 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:42,449 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38165', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:42,449 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:42,450 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38165
-2022-08-26 14:03:42,450 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:42,450 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46059
-2022-08-26 14:03:42,450 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:42,451 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:42,455 - distributed.scheduler - INFO - Receive client connection: Client-90d9ed1a-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:42,456 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:42,479 - distributed.scheduler - INFO - Remove client Client-90d9ed1a-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:42,479 - distributed.scheduler - INFO - Remove client Client-90d9ed1a-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:42,479 - distributed.scheduler - INFO - Close client connection: Client-90d9ed1a-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_recreate_task_delayed 2022-08-26 14:03:42,772 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_recreate_task_futures 2022-08-26 14:03:43,031 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_recreate_task_collection 2022-08-26 14:03:43,363 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_recreate_task_array 2022-08-26 14:03:43,626 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_recreate_task_sync 2022-08-26 14:03:44,434 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:44,436 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:44,439 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:44,439 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35561
-2022-08-26 14:03:44,439 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:44,447 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37943
-2022-08-26 14:03:44,447 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37943
-2022-08-26 14:03:44,447 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40881
-2022-08-26 14:03:44,447 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43607
-2022-08-26 14:03:44,447 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43607
-2022-08-26 14:03:44,447 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35561
-2022-08-26 14:03:44,447 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:44,447 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33407
-2022-08-26 14:03:44,447 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:44,447 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35561
-2022-08-26 14:03:44,447 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:44,447 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:44,447 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:44,447 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0jymwvfx
-2022-08-26 14:03:44,447 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:44,447 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jiskf4bl
-2022-08-26 14:03:44,447 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:44,447 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:44,698 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43607', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:44,946 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43607
-2022-08-26 14:03:44,946 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:44,946 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35561
-2022-08-26 14:03:44,946 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:44,947 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37943', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:44,947 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37943
-2022-08-26 14:03:44,947 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:44,947 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:44,947 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35561
-2022-08-26 14:03:44,948 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:44,948 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:44,953 - distributed.scheduler - INFO - Receive client connection: Client-92570655-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:44,953 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:44,992 - distributed.scheduler - INFO - Remove client Client-92570655-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:44,992 - distributed.scheduler - INFO - Remove client Client-92570655-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_retire_workers 2022-08-26 14:03:45,231 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_robust_unserializable 2022-08-26 14:03:45,491 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_robust_undeserializable 2022-08-26 14:03:45,535 - distributed.worker - ERROR - Could not deserialize task identity-8536e9ae06a34fdd9b98e1186f569102
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2161, in execute
-    function, args, kwargs = await self._maybe_deserialize_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2134, in _maybe_deserialize_task
-    function, args, kwargs = _deserialize(*ts.run_spec)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2757, in _deserialize
-    args = pickle.loads(args)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/pickle.py", line 73, in loads
-    return pickle.loads(x)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_client.py", line 4847, in __setstate__
-    raise MyException("hello")
-test_client.MyException: hello
-2022-08-26 14:03:45,753 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_robust_undeserializable_function 2022-08-26 14:03:45,798 - distributed.worker - ERROR - Could not deserialize task <test_client.test_robust_undeserializable_function-3eba3fb68cf960de02eb9a262f91853e
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2744, in loads_function
-    result = cache_loads[bytes_object]
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/collections.py", line 23, in __getitem__
-    value = super().__getitem__(key)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/collections/__init__.py", line 1106, in __getitem__
-    raise KeyError(key)
-KeyError: b"\x80\x04\x95\x11\x05\x00\x00\x00\x00\x00\x00\x8c\x17cloudpickle.cloudpickle\x94\x8c\x14_make_skeleton_class\x94\x93\x94(\x8c\x08builtins\x94\x8c\x04type\x94\x93\x94\x8c\x03Foo\x94h\x03\x8c\x06object\x94\x93\x94\x85\x94}\x94\x8c 904e1b42691c4e1bb3256614361887fb\x94Nt\x94R\x94\x8c\x1ccloudpickle.cloudpickle_fast\x94\x8c\x0f_class_setstate\x94\x93\x94h\r}\x94(\x8c\n__module__\x94\x8c\x0btest_client\x94\x8c\x0c__getstate__\x94h\x00\x8c\x0e_make_function\x94\x93\x94(h\x00\x8c\r_builtin_type\x94\x93\x94\x8c\x08CodeType\x94\x85\x94R\x94(K\x01K\x00K\x00K\x01K\x01JS\x00\x00\x01C\x04d\x01S\x00\x94NK\x01\x86\x94)\x8c\x04self\x94\x85\x94\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_client.py\x94h\x14M\xff\x12C\x02\x04\x01\x94))t\x94R\x94}\x94(\x8c\x0b__package__\x94\x8c\x00\x94\x8c\x08__name__\x94h\x13\x8c\x08__file__\x94\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_clie
nt.py\x94uNNNt\x94R\x94h\x0e\x8c\x12_function_setstate\x94\x93\x94h+}\x94}\x94(h'h\x14\x8c\x0c__qualname__\x94\x8c?test_robust_undeserializable_function.<locals>.Foo.__getstate__\x94\x8c\x0f__annotations__\x94}\x94\x8c\x0e__kwdefaults__\x94N\x8c\x0c__defaults__\x94Nh\x12h\x13\x8c\x07__doc__\x94N\x8c\x0b__closure__\x94N\x8c\x17_cloudpickle_submodules\x94]\x94\x8c\x0b__globals__\x94}\x94u\x86\x94\x86R0\x8c\x0c__setstate__\x94h\x16(h\x1b(K\x02K\x00K\x00K\x02K\x02JS\x00\x00\x01C\x08t\x00d\x01\x83\x01\x82\x01\x94N\x8c\x05hello\x94\x86\x94\x8c\x0bMyException\x94\x85\x94h\x1e\x8c\x05state\x94\x86\x94h h=M\x02\x13C\x02\x08\x01\x94))t\x94R\x94h$NNNt\x94R\x94h-hI}\x94}\x94(h'h=h0\x8c?test_robust_undeserializable_function.<locals>.Foo.__setstate__\x94h2}\x94h4Nh5Nh\x12h\x13h6Nh7Nh8]\x94h:}\x94hAh\x13hA\x93\x94su\x86\x94\x86R0\x8c\x08__call__\x94h\x16(h\x1b(K\x01K\x00K\x00K\x02K\x01JW\x00\x00\x01h\x1ch\x1d)h\x1e\x8c\x04args\x94\x86\x94h hRM\x05\x13h!))t\x94R\x94h$NNNt\x94R\x94h-hX}\x94}\x94(h'hR
 h0\x8c;test_robust_undeserializable_function.<locals>.Foo.__call__\x94h2}\x94h4Nh5Nh\x12h\x13h6Nh7Nh8]\x94h:}\x94u\x86\x94\x86R0h6Nu}\x94\x86\x94\x86R0)\x81\x94K\x01b."
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2161, in execute
-    function, args, kwargs = await self._maybe_deserialize_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2134, in _maybe_deserialize_task
-    function, args, kwargs = _deserialize(*ts.run_spec)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2755, in _deserialize
-    function = loads_function(function)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2746, in loads_function
-    result = pickle.loads(bytes_object)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/pickle.py", line 73, in loads
-    return pickle.loads(x)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_client.py", line 4867, in __setstate__
-    raise MyException("hello")
-test_client.MyException: hello
-2022-08-26 14:03:46,016 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_fire_and_forget 2022-08-26 14:03:46,369 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_fire_and_forget_err 2022-08-26 14:03:46,413 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:03:46,692 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_quiet_client_close PASSED
-distributed/tests/test_client.py::test_quiet_client_close_when_cluster_is_closed_before_client SKIPPED
-distributed/tests/test_client.py::test_close 2022-08-26 14:03:47,387 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_threadsafe 2022-08-26 14:03:48,186 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:48,188 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:48,191 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:48,191 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40151
-2022-08-26 14:03:48,191 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:48,199 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45139
-2022-08-26 14:03:48,199 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45139
-2022-08-26 14:03:48,199 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37689
-2022-08-26 14:03:48,199 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39103
-2022-08-26 14:03:48,199 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37689
-2022-08-26 14:03:48,199 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40151
-2022-08-26 14:03:48,199 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42923
-2022-08-26 14:03:48,199 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:48,199 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40151
-2022-08-26 14:03:48,199 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:48,199 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:48,199 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:48,199 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:48,199 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p2hq9qir
-2022-08-26 14:03:48,199 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:48,199 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:48,199 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qz9x6vbz
-2022-08-26 14:03:48,199 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:48,450 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45139', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:48,695 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45139
-2022-08-26 14:03:48,695 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:48,695 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40151
-2022-08-26 14:03:48,696 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:48,696 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37689', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:48,696 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37689
-2022-08-26 14:03:48,696 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:48,696 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:48,697 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40151
-2022-08-26 14:03:48,697 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:48,697 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:48,703 - distributed.scheduler - INFO - Receive client connection: Client-94932269-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:48,703 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:48,995 - distributed.scheduler - INFO - Remove client Client-94932269-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:48,995 - distributed.scheduler - INFO - Remove client Client-94932269-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_threadsafe_get SKIPPED (need ...)
-distributed/tests/test_client.py::test_threadsafe_compute SKIPPED (n...)
-distributed/tests/test_client.py::test_identity 2022-08-26 14:03:49,239 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_get_client 2022-08-26 14:03:49,506 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_get_client_no_cluster PASSED
-distributed/tests/test_client.py::test_serialize_collections 2022-08-26 14:03:49,778 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_secede_simple 2022-08-26 14:03:50,029 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_secede_balances 2022-08-26 14:03:50,421 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_long_running_not_in_occupancy[True] 2022-08-26 14:03:50,475 - distributed.worker - WARNING - Compute Failed
-Key:       long_running-0b3bd8a55598dee80eded65266189e55
-Function:  long_running
-args:      (<distributed.lock.Lock object at 0x56404186cb00>, <distributed.event.Event object at 0x56403e37b0a0>)
-kwargs:    {}
-Exception: "RuntimeError('Exception in task')"
-
-2022-08-26 14:03:50,668 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_long_running_not_in_occupancy[False] 2022-08-26 14:03:50,926 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_long_running_removal_clean[True] 2022-08-26 14:03:51,175 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_long_running_removal_clean[False] 2022-08-26 14:03:51,415 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_sub_submit_priority 2022-08-26 14:03:52,158 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_get_client_sync 2022-08-26 14:03:52,959 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:52,962 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:52,964 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:52,965 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43293
-2022-08-26 14:03:52,965 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:52,972 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44919
-2022-08-26 14:03:52,972 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44919
-2022-08-26 14:03:52,972 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46369
-2022-08-26 14:03:52,972 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43293
-2022-08-26 14:03:52,972 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34479
-2022-08-26 14:03:52,973 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:52,973 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:52,973 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34479
-2022-08-26 14:03:52,973 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:52,973 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34289
-2022-08-26 14:03:52,973 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lphrwsnp
-2022-08-26 14:03:52,973 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43293
-2022-08-26 14:03:52,973 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:52,973 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:52,973 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:52,973 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:52,973 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nvlon_1e
-2022-08-26 14:03:52,973 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:53,226 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44919', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:53,473 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44919
-2022-08-26 14:03:53,474 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:53,474 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43293
-2022-08-26 14:03:53,474 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:53,474 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34479', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:53,475 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34479
-2022-08-26 14:03:53,475 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:53,475 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:53,475 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43293
-2022-08-26 14:03:53,475 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:53,476 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:53,480 - distributed.scheduler - INFO - Receive client connection: Client-976c3451-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:53,481 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:53,484 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:03:53,484 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:03:53,487 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:03:53,487 - distributed.worker - INFO - Run out-of-band function 'lambda'
-PASSED2022-08-26 14:03:53,490 - distributed.scheduler - INFO - Receive client connection: Client-worker-976d9fb9-2582-11ed-87f5-00d861bc4509
-2022-08-26 14:03:53,490 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:53,491 - distributed.scheduler - INFO - Receive client connection: Client-worker-976da23b-2582-11ed-87f6-00d861bc4509
-2022-08-26 14:03:53,491 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:53,492 - distributed.scheduler - INFO - Remove client Client-976c3451-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:53,492 - distributed.scheduler - INFO - Remove client Client-976c3451-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:53,492 - distributed.scheduler - INFO - Close client connection: Client-976c3451-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_serialize_collections_of_futures 2022-08-26 14:03:53,763 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_serialize_collections_of_futures_sync 2022-08-26 14:03:54,569 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:54,571 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:54,574 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:54,574 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44859
-2022-08-26 14:03:54,574 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:54,582 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46089
-2022-08-26 14:03:54,582 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38531
-2022-08-26 14:03:54,582 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38531
-2022-08-26 14:03:54,582 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46089
-2022-08-26 14:03:54,582 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41873
-2022-08-26 14:03:54,582 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39547
-2022-08-26 14:03:54,582 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44859
-2022-08-26 14:03:54,582 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44859
-2022-08-26 14:03:54,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:54,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:54,582 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:54,582 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:54,582 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:54,582 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:54,582 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4l79xyax
-2022-08-26 14:03:54,582 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vys8h2lx
-2022-08-26 14:03:54,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:54,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:54,835 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46089', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:55,084 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46089
-2022-08-26 14:03:55,084 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:55,084 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44859
-2022-08-26 14:03:55,084 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:55,085 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38531', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:55,085 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:55,085 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38531
-2022-08-26 14:03:55,085 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:55,085 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44859
-2022-08-26 14:03:55,085 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:55,086 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:55,091 - distributed.scheduler - INFO - Receive client connection: Client-9861f240-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:55,091 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:55,200 - distributed.scheduler - INFO - Receive client connection: Client-worker-98720695-2582-11ed-8814-00d861bc4509
-2022-08-26 14:03:55,200 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:55,326 - distributed.scheduler - INFO - Remove client Client-9861f240-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:55,326 - distributed.scheduler - INFO - Remove client Client-9861f240-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:55,326 - distributed.scheduler - INFO - Close client connection: Client-9861f240-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_dynamic_workloads_sync 2022-08-26 14:03:56,143 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:56,145 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:56,148 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:56,148 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38575
-2022-08-26 14:03:56,148 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:56,150 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-vys8h2lx', purging
-2022-08-26 14:03:56,150 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-4l79xyax', purging
-2022-08-26 14:03:56,156 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39631
-2022-08-26 14:03:56,156 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39631
-2022-08-26 14:03:56,156 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34633
-2022-08-26 14:03:56,156 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38575
-2022-08-26 14:03:56,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:56,156 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:56,156 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:56,156 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0_gt5_lu
-2022-08-26 14:03:56,156 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34271
-2022-08-26 14:03:56,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:56,156 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34271
-2022-08-26 14:03:56,156 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45533
-2022-08-26 14:03:56,156 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38575
-2022-08-26 14:03:56,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:56,156 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:56,156 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:56,156 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ds675rwd
-2022-08-26 14:03:56,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:56,409 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34271', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:56,658 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34271
-2022-08-26 14:03:56,658 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:56,658 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38575
-2022-08-26 14:03:56,658 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:56,658 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39631', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:56,659 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:56,659 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39631
-2022-08-26 14:03:56,659 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:56,659 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38575
-2022-08-26 14:03:56,659 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:56,660 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:56,665 - distributed.scheduler - INFO - Receive client connection: Client-9952236b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:56,665 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:56,730 - distributed.scheduler - INFO - Receive client connection: Client-worker-995be0ef-2582-11ed-8831-00d861bc4509
-2022-08-26 14:03:56,731 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:56,867 - distributed.scheduler - INFO - Receive client connection: Client-worker-9970eb7c-2582-11ed-8832-00d861bc4509
-2022-08-26 14:03:56,867 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:57,180 - distributed.scheduler - INFO - Remove client Client-9952236b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:57,180 - distributed.scheduler - INFO - Remove client Client-9952236b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:57,181 - distributed.scheduler - INFO - Close client connection: Client-9952236b-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_dynamic_workloads_sync_random SKIPPED
-distributed/tests/test_client.py::test_bytes_keys 2022-08-26 14:03:57,439 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_unicode_ascii_keys 2022-08-26 14:03:57,684 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_unicode_keys 2022-08-26 14:03:57,940 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_use_synchronous_client_in_async_context 2022-08-26 14:03:58,750 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:03:58,752 - distributed.scheduler - INFO - State start
-2022-08-26 14:03:58,755 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:03:58,755 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36003
-2022-08-26 14:03:58,755 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:03:58,763 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34663
-2022-08-26 14:03:58,763 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34663
-2022-08-26 14:03:58,763 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45719
-2022-08-26 14:03:58,763 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36003
-2022-08-26 14:03:58,763 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:58,763 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:58,763 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33891
-2022-08-26 14:03:58,763 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:58,763 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33891
-2022-08-26 14:03:58,763 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0l7ffrm1
-2022-08-26 14:03:58,763 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41611
-2022-08-26 14:03:58,763 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:58,763 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36003
-2022-08-26 14:03:58,763 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:58,763 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:03:58,763 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:03:58,763 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h5x9tkks
-2022-08-26 14:03:58,763 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:59,018 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34663', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:59,266 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34663
-2022-08-26 14:03:59,266 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:59,266 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36003
-2022-08-26 14:03:59,266 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:59,267 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33891', status: init, memory: 0, processing: 0>
-2022-08-26 14:03:59,267 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:59,267 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33891
-2022-08-26 14:03:59,267 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:59,267 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36003
-2022-08-26 14:03:59,267 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:59,268 - distributed.core - INFO - Starting established connection
-2022-08-26 14:03:59,273 - distributed.scheduler - INFO - Receive client connection: Client-9ae00fa7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:59,273 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:03:59,296 - distributed.scheduler - INFO - Remove client Client-9ae00fa7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:03:59,296 - distributed.scheduler - INFO - Remove client Client-9ae00fa7-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_quiet_quit_when_cluster_leaves 2022-08-26 14:03:59,928 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-0l7ffrm1', purging
-2022-08-26 14:03:59,928 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-h5x9tkks', purging
-2022-08-26 14:03:59,934 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41599
-2022-08-26 14:03:59,934 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41599
-2022-08-26 14:03:59,934 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:03:59,934 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45321
-2022-08-26 14:03:59,934 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35267
-2022-08-26 14:03:59,934 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:59,934 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:03:59,934 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 14:03:59,934 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8m4x5jhj
-2022-08-26 14:03:59,934 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:59,934 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39315
-2022-08-26 14:03:59,935 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39315
-2022-08-26 14:03:59,935 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:03:59,935 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35891
-2022-08-26 14:03:59,935 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35267
-2022-08-26 14:03:59,935 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:59,935 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:03:59,935 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 14:03:59,935 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_skij9ce
-2022-08-26 14:03:59,935 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:59,945 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33881
-2022-08-26 14:03:59,946 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41141
-2022-08-26 14:03:59,946 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33881
-2022-08-26 14:03:59,946 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41141
-2022-08-26 14:03:59,946 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:03:59,946 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:03:59,946 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46253
-2022-08-26 14:03:59,946 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42993
-2022-08-26 14:03:59,946 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35267
-2022-08-26 14:03:59,946 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35267
-2022-08-26 14:03:59,946 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:59,946 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:59,946 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:03:59,946 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:03:59,946 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 14:03:59,946 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 14:03:59,946 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pyo0l8c5
-2022-08-26 14:03:59,946 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nswjswha
-2022-08-26 14:03:59,946 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:03:59,946 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:00,194 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35267
-2022-08-26 14:04:00,195 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:00,195 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:00,196 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35267
-2022-08-26 14:04:00,196 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:00,197 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:00,206 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35267
-2022-08-26 14:04:00,207 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:00,207 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35267
-2022-08-26 14:04:00,207 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:00,207 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:00,208 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:00,270 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33881
-2022-08-26 14:04:00,270 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39315
-2022-08-26 14:04:00,270 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41141
-2022-08-26 14:04:00,271 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c4df0e83-a884-4f63-9d27-6c00e1453f61 Address tcp://127.0.0.1:33881 Status: Status.closing
-2022-08-26 14:04:00,271 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41599
-2022-08-26 14:04:00,271 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6b4633d3-e32b-41e0-9b4e-6bf02d022b5c Address tcp://127.0.0.1:41141 Status: Status.closing
-2022-08-26 14:04:00,271 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-166d7043-66dc-47fd-b324-9a6c5ea793e3 Address tcp://127.0.0.1:39315 Status: Status.closing
-2022-08-26 14:04:00,272 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d2bc20a8-e79c-44c9-aef7-1060d1b95a34 Address tcp://127.0.0.1:41599 Status: Status.closing
-2022-08-26 14:04:00,605 - distributed.client - ERROR - 
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x7f15300c2d80>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/tests/test_client.py::test_call_stack_future 2022-08-26 14:04:01,343 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_call_stack_all 2022-08-26 14:04:02,376 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_call_stack_collections 2022-08-26 14:04:04,117 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_call_stack_collections_all 2022-08-26 14:04:05,858 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_profile 2022-08-26 14:04:06,625 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_profile_disabled 2022-08-26 14:04:07,402 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_profile_keys 2022-08-26 14:04:08,683 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_client_with_name 2022-08-26 14:04:08,917 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_future_defaults_to_default_client 2022-08-26 14:04:09,163 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_future_auto_inform 2022-08-26 14:04:09,435 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_client_async_before_loop_starts FAILED
-distributed/tests/test_client.py::test_nested_compute SKIPPED (need ...)
-distributed/tests/test_client.py::test_task_metadata 2022-08-26 14:04:09,704 - distributed.core - ERROR - Exception while handling op get_metadata
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 768, in _handle_comm
-    result = handler(**msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6567, in get_metadata
-    return metadata[keys[-1]]
-KeyError: 'inc-03d935909bba38f9a49655e867cbf56a'
-2022-08-26 14:04:09,903 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_logs 2022-08-26 14:04:10,485 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44263
-2022-08-26 14:04:10,485 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44263
-2022-08-26 14:04:10,485 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:04:10,486 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46063
-2022-08-26 14:04:10,486 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33767
-2022-08-26 14:04:10,486 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:10,486 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:04:10,486 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:10,486 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7eddvk9d
-2022-08-26 14:04:10,486 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:10,493 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36593
-2022-08-26 14:04:10,493 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36593
-2022-08-26 14:04:10,493 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:04:10,493 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44785
-2022-08-26 14:04:10,493 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33767
-2022-08-26 14:04:10,493 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:10,493 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:10,493 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:10,493 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i3szzm2g
-2022-08-26 14:04:10,493 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:10,730 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33767
-2022-08-26 14:04:10,730 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:10,731 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:10,744 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33767
-2022-08-26 14:04:10,744 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:10,744 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:11,040 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36593
-2022-08-26 14:04:11,040 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44263
-2022-08-26 14:04:11,040 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-76c039c9-050a-468f-ad58-c60db019b788 Address tcp://127.0.0.1:36593 Status: Status.closing
-2022-08-26 14:04:11,041 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-72e9626f-1668-4e9f-8246-97628a503919 Address tcp://127.0.0.1:44263 Status: Status.closing
-2022-08-26 14:04:11,399 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_avoid_delayed_finalize 2022-08-26 14:04:11,644 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_config_scheduler_address 2022-08-26 14:04:11,880 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_warn_when_submitting_large_values 2022-08-26 14:04:12,241 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_unhashable_function 2022-08-26 14:04:12,489 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_client_name 2022-08-26 14:04:12,723 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_client_doesnt_close_given_loop 2022-08-26 14:04:13,528 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:04:13,530 - distributed.scheduler - INFO - State start
-2022-08-26 14:04:13,533 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:04:13,533 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37567
-2022-08-26 14:04:13,533 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:04:13,541 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33759
-2022-08-26 14:04:13,541 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38763
-2022-08-26 14:04:13,541 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33759
-2022-08-26 14:04:13,541 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38763
-2022-08-26 14:04:13,541 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34981
-2022-08-26 14:04:13,541 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37567
-2022-08-26 14:04:13,541 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43339
-2022-08-26 14:04:13,541 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:13,541 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37567
-2022-08-26 14:04:13,541 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:13,541 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:13,541 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:13,541 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:13,542 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7w9_wblm
-2022-08-26 14:04:13,542 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:13,542 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0t60svcx
-2022-08-26 14:04:13,542 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:13,542 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:13,795 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33759', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:14,042 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33759
-2022-08-26 14:04:14,042 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:14,042 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37567
-2022-08-26 14:04:14,042 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:14,042 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38763', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:14,043 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38763
-2022-08-26 14:04:14,043 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:14,043 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:14,043 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37567
-2022-08-26 14:04:14,043 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:14,044 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:14,049 - distributed.scheduler - INFO - Receive client connection: Client-a3aebbcc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:14,050 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:14,072 - distributed.scheduler - INFO - Remove client Client-a3aebbcc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:14,072 - distributed.scheduler - INFO - Remove client Client-a3aebbcc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:14,072 - distributed.scheduler - INFO - Close client connection: Client-a3aebbcc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:14,075 - distributed.scheduler - INFO - Receive client connection: Client-a3b2b531-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:14,075 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:14,097 - distributed.scheduler - INFO - Remove client Client-a3b2b531-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:14,097 - distributed.scheduler - INFO - Remove client Client-a3b2b531-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:14,097 - distributed.scheduler - INFO - Close client connection: Client-a3b2b531-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_client.py::test_quiet_scheduler_loss 2022-08-26 14:04:14,219 - distributed.client - ERROR - 
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x56403d8f5790>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-2022-08-26 14:04:14,408 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
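
The traceback logged by test_quiet_scheduler_loss shows distributed/comm/tcp.py wrapping a low-level ConnectionRefusedError in its own CommClosedError with "raise ... from exc". A minimal stand-alone sketch of that chaining pattern, using a stand-in exception class rather than distributed's real one:

    class CommClosedError(Exception):
        """Stand-in for distributed.comm.core.CommClosedError (illustration only)."""

    def convert_stream_closed_error(obj, exc):
        # Same pattern as the line shown in the traceback above: re-raise with the
        # original error preserved as __cause__, which is what produces the chained
        # "The above exception was the direct cause of ..." output in the log.
        raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc

    try:
        try:
            raise ConnectionRefusedError(111, "Connection refused")
        except ConnectionRefusedError as exc:
            convert_stream_closed_error("<TCPConnector>", exc)
    except CommClosedError as err:
        print(err)
        print("caused by:", repr(err.__cause__))
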
-distributed/tests/test_client.py::test_dashboard_link 2022-08-26 14:04:15,638 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:04:15,678 - distributed.scheduler - INFO - State start
-2022-08-26 14:04:15,681 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:04:15,681 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43155
-2022-08-26 14:04:15,681 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:12355
-2022-08-26 14:04:15,683 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-7w9_wblm', purging
-2022-08-26 14:04:15,684 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-0t60svcx', purging
-2022-08-26 14:04:15,690 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41109
-2022-08-26 14:04:15,690 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41109
-2022-08-26 14:04:15,690 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42485
-2022-08-26 14:04:15,690 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43155
-2022-08-26 14:04:15,690 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:15,690 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:15,690 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:15,690 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qem8vrei
-2022-08-26 14:04:15,690 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45227
-2022-08-26 14:04:15,690 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:15,690 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45227
-2022-08-26 14:04:15,690 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33087
-2022-08-26 14:04:15,690 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43155
-2022-08-26 14:04:15,690 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:15,690 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:15,690 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:15,690 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nhio2lhi
-2022-08-26 14:04:15,690 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:15,940 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45227', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:15,968 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45227
-2022-08-26 14:04:15,968 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:15,968 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43155
-2022-08-26 14:04:15,969 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:15,969 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41109', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:15,969 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41109
-2022-08-26 14:04:15,969 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:15,969 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:15,969 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43155
-2022-08-26 14:04:15,970 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:15,970 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:15,975 - distributed.scheduler - INFO - Receive client connection: Client-a4d49cfc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:15,975 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:15,997 - distributed.scheduler - INFO - Remove client Client-a4d49cfc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:15,997 - distributed.scheduler - INFO - Remove client Client-a4d49cfc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:15,997 - distributed.scheduler - INFO - Close client connection: Client-a4d49cfc-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_client.py::test_dashboard_link_inproc PASSED
-distributed/tests/test_client.py::test_client_timeout_2 PASSED
-distributed/tests/test_client.py::test_client_active_bad_port PASSED
-distributed/tests/test_client.py::test_turn_off_pickle[True] 2022-08-26 14:04:16,147 - distributed.protocol.core - CRITICAL - Failed to Serialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 109, in dumps
-    frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/msgpack/__init__.py", line 38, in packb
-    return Packer(**kwargs).pack(o)
-  File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 285, in msgpack._cmsgpack.Packer._pack
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 100, in _encode_default
-    frames.extend(create_serialized_sub_frames(obj))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 60, in create_serialized_sub_frames
-    sub_header, sub_frames = serialize_and_split(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 266, in serialize
-    return serialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type function', '<function inc at 0x5640384ccd40>')
-2022-08-26 14:04:16,148 - distributed.comm.utils - ERROR - ('Could not serialize object of type function', '<function inc at 0x5640384ccd40>')
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/utils.py", line 55, in _to_frames
-    return list(protocol.dumps(msg, **kwargs))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 109, in dumps
-    frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/msgpack/__init__.py", line 38, in packb
-    return Packer(**kwargs).pack(o)
-  File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 285, in msgpack._cmsgpack.Packer._pack
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 100, in _encode_default
-    frames.extend(create_serialized_sub_frames(obj))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 60, in create_serialized_sub_frames
-    sub_header, sub_frames = serialize_and_split(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 266, in serialize
-    return serialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type function', '<function inc at 0x5640384ccd40>')
-2022-08-26 14:04:16,157 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 421, in deserialize
-    raise TypeError(
-TypeError: Data serialized with error but only able to deserialize data with ['dask', 'msgpack']
-2022-08-26 14:04:16,165 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 421, in deserialize
-    raise TypeError(
-TypeError: Data serialized with error but only able to deserialize data with ['dask', 'msgpack']
-2022-08-26 14:04:16,166 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 421, in deserialize
-    raise TypeError(
-TypeError: Data serialized with error but only able to deserialize data with ['dask', 'msgpack']
-2022-08-26 14:04:16,168 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:38363 -> None
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1696, in get_data
-    response = await comm.read(deserializers=serializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 241, in read
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed)  local=tcp://127.0.0.1:38363 remote=tcp://127.0.0.1:43680>: Stream is closed
-2022-08-26 14:04:16,360 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
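
The "Could not serialize object of type function" errors above are what test_turn_off_pickle is exercising: with pickle disabled, only the msgpack/dask serializers remain, and msgpack has no way to encode a Python function. A minimal sketch of that limitation using msgpack directly (msgpack is already a dependency of this build; the payload dict is purely illustrative):

    import msgpack

    def inc(x):
        return x + 1

    try:
        # Functions have no msgpack representation, so packing fails; this is
        # why distributed normally falls back to pickle for callables.
        msgpack.packb({"task": inc}, use_bin_type=True)
    except TypeError as exc:
        print("msgpack cannot pack functions:", exc)
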
-distributed/tests/test_client.py::test_turn_off_pickle[False] 2022-08-26 14:04:16,420 - distributed.protocol.core - CRITICAL - Failed to Serialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 109, in dumps
-    frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/msgpack/__init__.py", line 38, in packb
-    return Packer(**kwargs).pack(o)
-  File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 285, in msgpack._cmsgpack.Packer._pack
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 100, in _encode_default
-    frames.extend(create_serialized_sub_frames(obj))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 60, in create_serialized_sub_frames
-    sub_header, sub_frames = serialize_and_split(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 266, in serialize
-    return serialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type function', '<function inc at 0x5640384ccd40>')
-2022-08-26 14:04:16,420 - distributed.comm.utils - ERROR - ('Could not serialize object of type function', '<function inc at 0x5640384ccd40>')
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/utils.py", line 55, in _to_frames
-    return list(protocol.dumps(msg, **kwargs))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 109, in dumps
-    frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/msgpack/__init__.py", line 38, in packb
-    return Packer(**kwargs).pack(o)
-  File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 285, in msgpack._cmsgpack.Packer._pack
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 100, in _encode_default
-    frames.extend(create_serialized_sub_frames(obj))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 60, in create_serialized_sub_frames
-    sub_header, sub_frames = serialize_and_split(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 266, in serialize
-    return serialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type function', '<function inc at 0x5640384ccd40>')
-2022-08-26 14:04:16,431 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 421, in deserialize
-    raise TypeError(
-TypeError: Data serialized with error but only able to deserialize data with ['dask', 'msgpack']
-2022-08-26 14:04:16,438 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 421, in deserialize
-    raise TypeError(
-TypeError: Data serialized with error but only able to deserialize data with ['dask', 'msgpack']
-2022-08-26 14:04:16,439 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 421, in deserialize
-    raise TypeError(
-TypeError: Data serialized with error but only able to deserialize data with ['dask', 'msgpack']
-2022-08-26 14:04:16,633 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_de_serialization 2022-08-26 14:04:16,669 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 421, in deserialize
-    raise TypeError(
-TypeError: Data serialized with error but only able to deserialize data with ['msgpack']
-2022-08-26 14:04:16,677 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:41817 -> None
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1696, in get_data
-    response = await comm.read(deserializers=serializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 241, in read
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed)  local=tcp://127.0.0.1:41817 remote=tcp://127.0.0.1:43664>: Stream is closed
-2022-08-26 14:04:16,870 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_de_serialization_none 2022-08-26 14:04:16,906 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 421, in deserialize
-    raise TypeError(
-TypeError: Data serialized with error but only able to deserialize data with ['msgpack']
-2022-08-26 14:04:16,913 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:35093 -> None
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1696, in get_data
-    response = await comm.read(deserializers=serializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 241, in read
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed)  local=tcp://127.0.0.1:35093 remote=tcp://127.0.0.1:52504>: Stream is closed
-2022-08-26 14:04:17,107 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_client_repr_closed 2022-08-26 14:04:17,348 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_client_repr_closed_sync PASSED
-distributed/tests/test_client.py::test_nested_prioritization XFAIL (...)
-distributed/tests/test_client.py::test_scatter_error_cancel 2022-08-26 14:04:17,668 - distributed.worker - WARNING - Compute Failed
-Key:       bad_fn-c502699bcbe9b7c1f4d103d8f5545ef3
-Function:  bad_fn
-args:      (1)
-kwargs:    {}
-Exception: "Exception('lol')"
-
-2022-08-26 14:04:17,977 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
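
The "Compute Failed" warning above is the worker-side record of a task raising Exception('lol'); on the client side the same exception is re-raised by Future.result(). A minimal sketch of that behaviour, assuming an in-process LocalCluster (bad_fn below is illustrative, not the test's own code):

    from distributed import Client, LocalCluster

    def bad_fn(x):
        raise Exception("lol")

    if __name__ == "__main__":
        with LocalCluster(n_workers=1, processes=False) as cluster:
            with Client(cluster) as client:
                future = client.submit(bad_fn, 1)
                try:
                    future.result()  # re-raises the worker-side exception here
                except Exception as exc:
                    print("task failed:", exc)
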
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[False-False-False] 2022-08-26 14:04:18,280 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[False-False-True] 2022-08-26 14:04:18,585 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[False-True-False] 2022-08-26 14:04:18,890 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[False-True-True] 2022-08-26 14:04:19,195 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[True-False-False] 2022-08-26 14:04:19,498 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[True-False-True] 2022-08-26 14:04:19,802 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[True-True-False] 2022-08-26 14:04:20,107 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[True-True-True] 2022-08-26 14:04:20,411 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[10-False-False] 2022-08-26 14:04:20,716 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[10-False-True] 2022-08-26 14:04:21,020 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[10-True-False] 2022-08-26 14:04:21,325 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_scatter_and_replicate_avoid_paused_workers[10-True-True] 2022-08-26 14:04:21,630 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_no_threads_lingering XPASS (G...)
-distributed/tests/test_client.py::test_direct_async 2022-08-26 14:04:21,885 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_direct_sync 2022-08-26 14:04:22,710 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:04:22,713 - distributed.scheduler - INFO - State start
-2022-08-26 14:04:22,715 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:04:22,716 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45309
-2022-08-26 14:04:22,716 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:04:22,724 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42931
-2022-08-26 14:04:22,724 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42931
-2022-08-26 14:04:22,724 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38089
-2022-08-26 14:04:22,724 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38089
-2022-08-26 14:04:22,724 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45903
-2022-08-26 14:04:22,724 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40221
-2022-08-26 14:04:22,724 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45309
-2022-08-26 14:04:22,724 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45309
-2022-08-26 14:04:22,724 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:22,724 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:22,724 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:22,724 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:22,724 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:22,724 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:22,724 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jc8ld7qr
-2022-08-26 14:04:22,724 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oj92od00
-2022-08-26 14:04:22,724 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:22,724 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:22,979 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42931', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:23,230 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42931
-2022-08-26 14:04:23,230 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:23,230 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45309
-2022-08-26 14:04:23,231 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:23,231 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38089', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:23,232 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:23,232 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38089
-2022-08-26 14:04:23,232 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:23,232 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45309
-2022-08-26 14:04:23,232 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:23,233 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:23,238 - distributed.scheduler - INFO - Receive client connection: Client-a928c4b6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:23,238 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:23,256 - distributed.scheduler - INFO - Receive client connection: Client-worker-a92b571c-2582-11ed-8e73-00d861bc4509
-2022-08-26 14:04:23,256 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:04:23,271 - distributed.scheduler - INFO - Remove client Client-a928c4b6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:23,271 - distributed.scheduler - INFO - Remove client Client-a928c4b6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:23,271 - distributed.scheduler - INFO - Close client connection: Client-a928c4b6-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_mixing_clients 2022-08-26 14:04:23,519 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_tuple_keys 2022-08-26 14:04:23,766 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_multiple_scatter 2022-08-26 14:04:24,018 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_map_large_kwargs_in_graph 2022-08-26 14:04:24,303 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_retry 2022-08-26 14:04:24,346 - distributed.worker - WARNING - Compute Failed
-Key:       f-d8a946f7f54864fc604e84206bca94ee
-Function:  f
-args:      ()
-kwargs:    {}
-Exception: 'AssertionError("assert False\\n +  where False = <function get at 0x564036cfd200>(\'foo\')\\n +    where <function get at 0x564036cfd200> = <module \'dask.config\' from \'/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/config.py\'>.get\\n +      where <module \'dask.config\' from \'/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/config.py\'> = dask.config")'
-
-2022-08-26 14:04:24,550 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
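
The failed task above asserts on dask.config.get('foo'), which evaluated to a falsy value at that point; presumably the test then changes the configuration and retries. The config side of that, sketched outside any cluster ('foo' is just the test's placeholder key):

    import dask

    with dask.config.set({"foo": False}):
        print(dask.config.get("foo"))   # False, so an assert on it fails as in the log
    with dask.config.set({"foo": True}):
        print(dask.config.get("foo"))   # True, so a retried task would pass

    # With nothing set, get() raises KeyError unless given a default:
    print(dask.config.get("foo", default="unset"))
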
-distributed/tests/test_client.py::test_retry_dependencies 2022-08-26 14:04:24,594 - distributed.worker - WARNING - Compute Failed
-Key:       f-0ba6d5b0a7ee0b21de7130ae016771ee
-Function:  f
-args:      ()
-kwargs:    {}
-Exception: "KeyError('foo')"
-
-2022-08-26 14:04:24,607 - distributed.worker - ERROR - Exception during execution of task inc-c715b8886596c0bdb7666703899a5d84.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2278, in _prepare_args_for_execution
-    data[k] = self.data[k]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 108, in __getitem__
-    raise KeyError(key)
-KeyError: 'f-0ba6d5b0a7ee0b21de7130ae016771ee'
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2176, in execute
-    args2, kwargs2 = self._prepare_args_for_execution(ts, args, kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2282, in _prepare_args_for_execution
-    data[k] = Actor(type(self.state.actors[k]), self.address, k, self)
-KeyError: 'f-0ba6d5b0a7ee0b21de7130ae016771ee'
-2022-08-26 14:04:24,811 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_released_dependencies 2022-08-26 14:04:24,860 - distributed.worker - WARNING - Compute Failed
-Key:       y
-Function:  f
-args:      (2)
-kwargs:    {}
-Exception: "KeyError('foo')"
-
-2022-08-26 14:04:25,068 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_profile_bokeh 2022-08-26 14:04:26,206 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_get_mix_futures_and_SubgraphCallable 2022-08-26 14:04:26,490 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_get_mix_futures_and_SubgraphCallable_dask_dataframe 2022-08-26 14:04:26,761 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_direct_to_workers 2022-08-26 14:04:27,593 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:04:27,595 - distributed.scheduler - INFO - State start
-2022-08-26 14:04:27,598 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:04:27,598 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35953
-2022-08-26 14:04:27,598 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:04:27,606 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42533
-2022-08-26 14:04:27,606 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42533
-2022-08-26 14:04:27,606 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41837
-2022-08-26 14:04:27,606 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35223
-2022-08-26 14:04:27,606 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35953
-2022-08-26 14:04:27,606 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35223
-2022-08-26 14:04:27,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:27,606 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34107
-2022-08-26 14:04:27,606 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:27,606 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35953
-2022-08-26 14:04:27,606 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:27,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:27,606 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zm99bh9g
-2022-08-26 14:04:27,606 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:27,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:27,606 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:27,606 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kt8zgpo3
-2022-08-26 14:04:27,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:27,863 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35223', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:28,118 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35223
-2022-08-26 14:04:28,118 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:28,118 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35953
-2022-08-26 14:04:28,118 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:28,119 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42533', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:28,119 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42533
-2022-08-26 14:04:28,119 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:28,119 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:28,119 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35953
-2022-08-26 14:04:28,120 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:28,120 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:28,125 - distributed.scheduler - INFO - Receive client connection: Client-ac12955b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:28,125 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:28,131 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:04:28,137 - distributed.scheduler - INFO - Remove client Client-ac12955b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:28,137 - distributed.scheduler - INFO - Remove client Client-ac12955b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:28,138 - distributed.scheduler - INFO - Close client connection: Client-ac12955b-2582-11ed-a99d-00d861bc4509
-PASSED
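
The "Run out-of-band function 'lambda'" line above is a worker logging a Client.run call, which executes a function directly on the workers rather than through the task graph. A minimal sketch with an in-process cluster (not the test's actual function):

    from distributed import Client, LocalCluster

    if __name__ == "__main__":
        with LocalCluster(n_workers=2, processes=False) as cluster:
            with Client(cluster) as client:
                # Client.run returns {worker_address: result} for every worker
                print(client.run(lambda: "ok"))
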
-distributed/tests/test_client.py::test_instances 2022-08-26 14:04:28,383 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_wait_for_workers 2022-08-26 14:04:29,016 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_file_descriptors_dont_leak[Worker] PASSED
-distributed/tests/test_client.py::test_file_descriptors_dont_leak[Nanny] 2022-08-26 14:04:29,831 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:35173
-2022-08-26 14:04:29,831 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:35173
-2022-08-26 14:04:29,831 - distributed.worker - INFO -          dashboard at:        192.168.1.159:42213
-2022-08-26 14:04:29,831 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:35193
-2022-08-26 14:04:29,831 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:29,831 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:04:29,831 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:29,831 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m9eb25rm
-2022-08-26 14:04:29,831 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:30,093 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:35193
-2022-08-26 14:04:30,093 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:30,094 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:30,705 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:40703
-2022-08-26 14:04:30,705 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:40703
-2022-08-26 14:04:30,705 - distributed.worker - INFO -          dashboard at:        192.168.1.159:36395
-2022-08-26 14:04:30,705 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:35193
-2022-08-26 14:04:30,706 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:30,706 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:04:30,706 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:30,706 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-avabchqt
-2022-08-26 14:04:30,706 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:30,947 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:35193
-2022-08-26 14:04:30,947 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:30,947 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:31,303 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:40703
-2022-08-26 14:04:31,304 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a956e126-a2ed-44c7-a68c-9e6c65ed40a7 Address tcp://192.168.1.159:40703 Status: Status.closing
-2022-08-26 14:04:31,458 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:35173
-2022-08-26 14:04:31,459 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9fa40978-9245-4fd8-ad1f-7dfeef86968f Address tcp://192.168.1.159:35173 Status: Status.closing
-PASSED
-distributed/tests/test_client.py::test_dashboard_link_cluster PASSED
-distributed/tests/test_client.py::test_shutdown PASSED
-distributed/tests/test_client.py::test_shutdown_localcluster 2022-08-26 14:04:33,860 - distributed.client - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1427, in _handle_report
-    msgs = await self.scheduler_comm.comm.read()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/inproc.py", line 211, in read
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/inproc.py", line 319, in connect
-    raise OSError(f"no endpoint for inproc address {address!r}")
-OSError: no endpoint for inproc address '192.168.1.159/518557/832'
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/tests/test_client.py::test_config_inherited_by_subprocess PASSED
-distributed/tests/test_client.py::test_futures_of_sorted 2022-08-26 14:04:35,277 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_profile_server 2022-08-26 14:04:37,187 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_profile_server_disabled 2022-08-26 14:04:39,090 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_await_future 2022-08-26 14:04:39,144 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:04:39,350 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_as_completed_async_for 2022-08-26 14:04:39,622 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_as_completed_async_for_results 2022-08-26 14:04:39,890 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_as_completed_async_for_cancel 2022-08-26 14:04:40,139 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_async_with PASSED
-distributed/tests/test_client.py::test_client_sync_with_async_def 2022-08-26 14:04:41,034 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:04:41,036 - distributed.scheduler - INFO - State start
-2022-08-26 14:04:41,039 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:04:41,039 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43691
-2022-08-26 14:04:41,039 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:04:41,048 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44997
-2022-08-26 14:04:41,048 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44997
-2022-08-26 14:04:41,048 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33151
-2022-08-26 14:04:41,048 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43691
-2022-08-26 14:04:41,048 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:41,048 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:41,048 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42595
-2022-08-26 14:04:41,048 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:41,048 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-debgyj3e
-2022-08-26 14:04:41,048 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42595
-2022-08-26 14:04:41,048 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:41,049 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42077
-2022-08-26 14:04:41,049 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43691
-2022-08-26 14:04:41,049 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:41,049 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:41,049 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:41,049 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-763ha5ml
-2022-08-26 14:04:41,049 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:41,308 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44997', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:41,562 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44997
-2022-08-26 14:04:41,562 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:41,562 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43691
-2022-08-26 14:04:41,562 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:41,562 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42595', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:41,563 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:41,563 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42595
-2022-08-26 14:04:41,563 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:41,563 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43691
-2022-08-26 14:04:41,563 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:41,564 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:41,570 - distributed.scheduler - INFO - Receive client connection: Client-b415e49a-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:41,570 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:41,591 - distributed.scheduler - INFO - Remove client Client-b415e49a-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:41,592 - distributed.scheduler - INFO - Remove client Client-b415e49a-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:41,592 - distributed.scheduler - INFO - Close client connection: Client-b415e49a-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_client.py::test_dont_hold_on_to_large_messages SKIPPED
-distributed/tests/test_client.py::test_run_on_scheduler_async_def 2022-08-26 14:04:41,853 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_run_on_scheduler_async_def_wait 2022-08-26 14:04:42,104 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_performance_report 2022-08-26 14:04:47,478 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_client_gather_semaphore_loop SKIPPED
-distributed/tests/test_client.py::test_as_completed_condition_loop 2022-08-26 14:04:47,755 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_client_connectionpool_semaphore_loop SKIPPED
-distributed/tests/test_client.py::test_mixed_compression SKIPPED (ne...)
-distributed/tests/test_client.py::test_futures_in_subgraphs 2022-08-26 14:04:48,134 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_get_task_metadata 2022-08-26 14:04:48,408 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_get_task_metadata_multiple 2022-08-26 14:04:48,708 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_register_worker_plugin_exception 2022-08-26 14:04:48,945 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_log_event 2022-08-26 14:04:49,189 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_annotations_task_state 2022-08-26 14:04:49,439 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_annotations_compute_time[compute] 2022-08-26 14:04:49,703 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_annotations_compute_time[persist] 2022-08-26 14:04:49,953 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_annotations_survive_optimization XFAIL
-distributed/tests/test_client.py::test_annotations_priorities 2022-08-26 14:04:50,407 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_annotations_workers 2022-08-26 14:04:50,657 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_annotations_retries 2022-08-26 14:04:50,908 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_annotations_blockwise_unpack 2022-08-26 14:04:50,953 - distributed.worker - WARNING - Compute Failed
-Key:       ('reliable_double-7616fc371e7f1a1495dccc9d7b544d5b', 0)
-Function:  subgraph_callable-d05bef6d-02d4-4ab2-aea2-fc3b8ce2
-args:      ((5,))
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:04:50,955 - distributed.worker - WARNING - Compute Failed
-Key:       ('reliable_double-7616fc371e7f1a1495dccc9d7b544d5b', 1)
-Function:  subgraph_callable-d05bef6d-02d4-4ab2-aea2-fc3b8ce2
-args:      ((5,))
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:04:51,177 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_annotations_resources 2022-08-26 14:04:51,425 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_annotations_resources_culled 2022-08-26 14:04:51,694 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_annotations_loose_restrictions 2022-08-26 14:04:51,943 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_workers_collection_restriction 2022-08-26 14:04:52,193 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_get_client_functions_spawn_clusters 2022-08-26 14:04:52,233 - distributed.worker - WARNING - Compute Failed
-Key:       f-c3db9a2b35ad81b80b6a4cae2ef1a59d
-Function:  f
-args:      (0)
-kwargs:    {}
-Exception: "DeprecationWarning('make_current is deprecated; start the event loop first')"
-
-2022-08-26 14:04:52,332 - distributed.worker - WARNING - Compute Failed
-Key:       f-84baab4a4da2e60584759ed6c778a5f2
-Function:  f
-args:      (1)
-kwargs:    {}
-Exception: "DeprecationWarning('make_current is deprecated; start the event loop first')"
-
-Dumped cluster state to test_cluster_dump/test_get_client_functions_spawn_clusters.yaml
-FAILED
-distributed/tests/test_client.py::test_computation_code_walk_frames PASSED
-distributed/tests/test_client.py::test_computation_object_code_dask_compute 2022-08-26 14:04:53,216 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:04:53,218 - distributed.scheduler - INFO - State start
-2022-08-26 14:04:53,221 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:04:53,221 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43581
-2022-08-26 14:04:53,221 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:04:53,229 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38843
-2022-08-26 14:04:53,229 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38843
-2022-08-26 14:04:53,229 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42637
-2022-08-26 14:04:53,229 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42637
-2022-08-26 14:04:53,229 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46609
-2022-08-26 14:04:53,229 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34439
-2022-08-26 14:04:53,229 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43581
-2022-08-26 14:04:53,230 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:53,230 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43581
-2022-08-26 14:04:53,230 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:53,230 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:53,230 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:53,230 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:53,230 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8fjwbkrc
-2022-08-26 14:04:53,230 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:53,230 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:53,230 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mfdguzmh
-2022-08-26 14:04:53,230 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:53,492 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38843', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:53,743 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38843
-2022-08-26 14:04:53,743 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:53,743 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43581
-2022-08-26 14:04:53,744 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:53,744 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42637', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:53,744 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42637
-2022-08-26 14:04:53,744 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:53,745 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:53,745 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43581
-2022-08-26 14:04:53,745 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:53,745 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:53,750 - distributed.scheduler - INFO - Receive client connection: Client-bb58a35d-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:53,751 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:53,793 - distributed.worker - INFO - Run out-of-band function 'fetch_comp_code'
-PASSED2022-08-26 14:04:53,802 - distributed.scheduler - INFO - Remove client Client-bb58a35d-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:53,802 - distributed.scheduler - INFO - Remove client Client-bb58a35d-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:53,803 - distributed.scheduler - INFO - Close client connection: Client-bb58a35d-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_computation_object_code_not_available 2022-08-26 14:04:54,649 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:04:54,651 - distributed.scheduler - INFO - State start
-2022-08-26 14:04:54,654 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:04:54,654 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43113
-2022-08-26 14:04:54,654 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:04:54,656 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-mfdguzmh', purging
-2022-08-26 14:04:54,657 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-8fjwbkrc', purging
-2022-08-26 14:04:54,662 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35239
-2022-08-26 14:04:54,662 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35239
-2022-08-26 14:04:54,662 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44673
-2022-08-26 14:04:54,662 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43113
-2022-08-26 14:04:54,662 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:54,662 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:54,662 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:54,663 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9lo5i05w
-2022-08-26 14:04:54,663 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:54,663 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37429
-2022-08-26 14:04:54,663 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37429
-2022-08-26 14:04:54,663 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44811
-2022-08-26 14:04:54,663 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43113
-2022-08-26 14:04:54,663 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:54,663 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:54,663 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:54,663 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q6v68333
-2022-08-26 14:04:54,663 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:54,924 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37429', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:55,176 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37429
-2022-08-26 14:04:55,176 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:55,176 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43113
-2022-08-26 14:04:55,177 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:55,177 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35239', status: init, memory: 0, processing: 0>
-2022-08-26 14:04:55,177 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:55,177 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35239
-2022-08-26 14:04:55,177 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:55,178 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43113
-2022-08-26 14:04:55,178 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:55,178 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:55,184 - distributed.scheduler - INFO - Receive client connection: Client-bc334f7a-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:55,184 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:55,218 - distributed.worker - INFO - Run out-of-band function 'fetch_comp_code'
-PASSED2022-08-26 14:04:55,227 - distributed.scheduler - INFO - Remove client Client-bc334f7a-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:55,227 - distributed.scheduler - INFO - Remove client Client-bc334f7a-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:04:55,227 - distributed.scheduler - INFO - Close client connection: Client-bc334f7a-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_computation_object_code_dask_persist 2022-08-26 14:04:55,518 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-FAILED
-distributed/tests/test_client.py::test_computation_object_code_client_submit_simple 2022-08-26 14:04:55,798 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_computation_object_code_client_submit_list_comp 2022-08-26 14:04:56,066 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_computation_object_code_client_submit_dict_comp 2022-08-26 14:04:56,335 - distributed.utils_perf - WARNING - full garbage collections took 47% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_computation_object_code_client_map 2022-08-26 14:04:56,628 - distributed.utils_perf - WARNING - full garbage collections took 47% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_computation_object_code_client_compute 2022-08-26 14:04:56,925 - distributed.utils_perf - WARNING - full garbage collections took 47% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_upload_directory 2022-08-26 14:04:57,546 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42917
-2022-08-26 14:04:57,546 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42917
-2022-08-26 14:04:57,546 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:04:57,546 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38897
-2022-08-26 14:04:57,546 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43245
-2022-08-26 14:04:57,546 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:57,546 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:04:57,547 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:57,547 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vtfn9uo3
-2022-08-26 14:04:57,547 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:57,549 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43125
-2022-08-26 14:04:57,549 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43125
-2022-08-26 14:04:57,549 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:04:57,549 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45387
-2022-08-26 14:04:57,549 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43245
-2022-08-26 14:04:57,549 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:57,549 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:57,549 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:57,549 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qchlm2b9
-2022-08-26 14:04:57,549 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:57,796 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43245
-2022-08-26 14:04:57,797 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:57,797 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:57,813 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43245
-2022-08-26 14:04:57,814 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:57,814 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:57,836 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43125
-2022-08-26 14:04:57,836 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42917
-2022-08-26 14:04:57,837 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e2c43699-2b45-4a81-95cf-6e5d1e3ed19d Address tcp://127.0.0.1:43125 Status: Status.closing
-2022-08-26 14:04:57,837 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3a6bc1c6-3da1-426f-b7ed-35d10d27ab69 Address tcp://127.0.0.1:42917 Status: Status.closing
-2022-08-26 14:04:57,968 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:04:57,971 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:04:58,575 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43689
-2022-08-26 14:04:58,575 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43689
-2022-08-26 14:04:58,575 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:04:58,575 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36375
-2022-08-26 14:04:58,575 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32943
-2022-08-26 14:04:58,575 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43245
-2022-08-26 14:04:58,575 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:58,575 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32943
-2022-08-26 14:04:58,575 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:04:58,575 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:04:58,575 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:58,575 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41525
-2022-08-26 14:04:58,575 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-r5105d2b
-2022-08-26 14:04:58,575 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43245
-2022-08-26 14:04:58,575 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:58,575 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:58,575 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:04:58,575 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:58,575 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hx994ei3
-2022-08-26 14:04:58,575 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:58,836 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43245
-2022-08-26 14:04:58,836 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:58,837 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:58,837 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43245
-2022-08-26 14:04:58,837 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:58,837 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:58,860 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:04:58,860 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:04:59,472 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35869
-2022-08-26 14:04:59,473 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35869
-2022-08-26 14:04:59,473 - distributed.worker - INFO -           Worker name:                        foo
-2022-08-26 14:04:59,473 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35905
-2022-08-26 14:04:59,473 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43245
-2022-08-26 14:04:59,473 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:59,473 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:04:59,473 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:04:59,473 - distributed.worker - INFO -       Local Directory: /tmp/pytest-of-matthew/pytest-12/test_upload_directory0/foo/dask-worker-space/worker-xz3xfm8s
-2022-08-26 14:04:59,473 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:59,719 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43245
-2022-08-26 14:04:59,719 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:04:59,720 - distributed.core - INFO - Starting established connection
-2022-08-26 14:04:59,758 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:04:59,758 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:04:59,758 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:04:59,760 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35869
-2022-08-26 14:04:59,761 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ae2043b4-42a9-40fb-98cc-3644fdf72770 Address tcp://127.0.0.1:35869 Status: Status.closing
-2022-08-26 14:04:59,890 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32943
-2022-08-26 14:04:59,890 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43689
-2022-08-26 14:04:59,891 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-23912974-3a02-466f-9129-4da5317f057c Address tcp://127.0.0.1:43689 Status: Status.closing
-2022-08-26 14:04:59,891 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e8b1884e-c8a6-47e5-8cbe-6d81c7db357f Address tcp://127.0.0.1:32943 Status: Status.closing
-2022-08-26 14:05:00,217 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_exception_text 2022-08-26 14:05:00,260 - distributed.worker - WARNING - Compute Failed
-Key:       bad-c9b52d7e3f576b60bbd4149caa3f49b4
-Function:  bad
-args:      (123)
-kwargs:    {}
-Exception: 'Exception(123)'
-
-2022-08-26 14:05:00,466 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_async_task 2022-08-26 14:05:00,715 - distributed.utils_perf - WARNING - full garbage collections took 47% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_async_task_with_partial 2022-08-26 14:05:00,965 - distributed.utils_perf - WARNING - full garbage collections took 47% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_events_subscribe_topic 2022-08-26 14:05:01,261 - distributed.utils_perf - WARNING - full garbage collections took 47% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_events_subscribe_topic_cancelled 2022-08-26 14:05:01,607 - distributed.utils_perf - WARNING - full garbage collections took 47% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_events_all_servers_use_same_channel 2022-08-26 14:05:02,263 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42155
-2022-08-26 14:05:02,263 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42155
-2022-08-26 14:05:02,263 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38211
-2022-08-26 14:05:02,263 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43137
-2022-08-26 14:05:02,263 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:02,263 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:05:02,263 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:02,263 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xpyd96kv
-2022-08-26 14:05:02,263 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:02,524 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43137
-2022-08-26 14:05:02,524 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:02,524 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:02,547 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42155
-2022-08-26 14:05:02,548 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cf9e9b17-3685-4979-8505-1ef429ba11ac Address tcp://127.0.0.1:42155 Status: Status.closing
-2022-08-26 14:05:02,871 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_events_unsubscribe_raises_if_unknown 2022-08-26 14:05:03,086 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_log_event_warn 2022-08-26 14:05:03,336 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_log_event_warn_dask_warns 2022-08-26 14:05:03,586 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_print 2022-08-26 14:05:04,207 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34931
-2022-08-26 14:05:04,207 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34931
-2022-08-26 14:05:04,207 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:05:04,207 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46665
-2022-08-26 14:05:04,207 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35761
-2022-08-26 14:05:04,207 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:04,207 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:04,207 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:04,207 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-506y4jgj
-2022-08-26 14:05:04,207 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:04,210 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34397
-2022-08-26 14:05:04,210 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34397
-2022-08-26 14:05:04,210 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:05:04,210 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34669
-2022-08-26 14:05:04,210 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35761
-2022-08-26 14:05:04,210 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:04,210 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:05:04,210 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:04,210 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sl64c9az
-2022-08-26 14:05:04,210 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:04,456 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35761
-2022-08-26 14:05:04,456 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:04,456 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:04,469 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35761
-2022-08-26 14:05:04,469 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:04,469 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:04,516 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34931
-2022-08-26 14:05:04,516 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34397
-2022-08-26 14:05:04,517 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-20d58154-539e-4f5d-9dd3-8c968c20ce0c Address tcp://127.0.0.1:34931 Status: Status.closing
-2022-08-26 14:05:04,517 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-94da9c28-94fe-43a7-9266-2c13bd352625 Address tcp://127.0.0.1:34397 Status: Status.closing
-Hello!:123
-2022-08-26 14:05:04,842 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_print_non_msgpack_serializable 2022-08-26 14:05:05,471 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35079
-2022-08-26 14:05:05,471 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35079
-2022-08-26 14:05:05,471 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:05:05,471 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42565
-2022-08-26 14:05:05,471 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36687
-2022-08-26 14:05:05,471 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:05,471 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:05:05,471 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:05,471 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c4hlcx3n
-2022-08-26 14:05:05,471 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:05,474 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39593
-2022-08-26 14:05:05,474 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39593
-2022-08-26 14:05:05,474 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:05:05,474 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40345
-2022-08-26 14:05:05,474 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36687
-2022-08-26 14:05:05,474 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:05,475 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:05,475 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:05,475 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7i5lku_y
-2022-08-26 14:05:05,475 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:05,723 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36687
-2022-08-26 14:05:05,723 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:05,724 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:05,737 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36687
-2022-08-26 14:05:05,737 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:05,737 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:05,770 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39593
-2022-08-26 14:05:05,770 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35079
-2022-08-26 14:05:05,770 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cdbbd9e9-241e-4409-bf7f-814997ae7908 Address tcp://127.0.0.1:39593 Status: Status.closing
-2022-08-26 14:05:05,771 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-58223775-da0f-4d97-8893-a4a114cbc3dc Address tcp://127.0.0.1:35079 Status: Status.closing
-<object object at 0x5602678f51e0>
-2022-08-26 14:05:06,094 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_print_simple PASSED
-distributed/tests/test_client.py::test_dump_cluster_state_write_from_scheduler 2022-08-26 14:05:06,934 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:06,937 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:06,939 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:06,940 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36281
-2022-08-26 14:05:06,940 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:06,948 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45695
-2022-08-26 14:05:06,948 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45695
-2022-08-26 14:05:06,948 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38305
-2022-08-26 14:05:06,948 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41321
-2022-08-26 14:05:06,948 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36281
-2022-08-26 14:05:06,948 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:06,948 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41321
-2022-08-26 14:05:06,948 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:06,948 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43275
-2022-08-26 14:05:06,948 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:06,948 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36281
-2022-08-26 14:05:06,948 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pkbtwtb2
-2022-08-26 14:05:06,948 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:06,948 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:06,948 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:06,948 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:06,948 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yvf6vbff
-2022-08-26 14:05:06,948 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:07,215 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41321', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:07,471 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41321
-2022-08-26 14:05:07,471 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:07,471 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36281
-2022-08-26 14:05:07,471 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:07,471 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45695', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:07,472 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45695
-2022-08-26 14:05:07,472 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:07,472 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:07,472 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36281
-2022-08-26 14:05:07,472 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:07,473 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:07,478 - distributed.scheduler - INFO - Receive client connection: Client-c3875413-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:07,478 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:07,484 - distributed.worker - INFO - Run out-of-band function 'chdir'
-PASSED2022-08-26 14:05:07,518 - distributed.scheduler - INFO - Remove client Client-c3875413-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:07,518 - distributed.scheduler - INFO - Remove client Client-c3875413-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:07,518 - distributed.scheduler - INFO - Close client connection: Client-c3875413-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_dump_cluster_state_sync[msgpack-True] 2022-08-26 14:05:08,375 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:08,377 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:08,380 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:08,380 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43141
-2022-08-26 14:05:08,380 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:08,383 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-pkbtwtb2', purging
-2022-08-26 14:05:08,383 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-yvf6vbff', purging
-2022-08-26 14:05:08,389 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38571
-2022-08-26 14:05:08,389 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38571
-2022-08-26 14:05:08,389 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38683
-2022-08-26 14:05:08,389 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43141
-2022-08-26 14:05:08,389 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:08,389 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:08,389 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:08,389 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lsq4hm_7
-2022-08-26 14:05:08,389 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46387
-2022-08-26 14:05:08,389 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:08,389 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46387
-2022-08-26 14:05:08,389 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35967
-2022-08-26 14:05:08,389 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43141
-2022-08-26 14:05:08,389 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:08,389 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:08,389 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:08,389 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lv2uhzuk
-2022-08-26 14:05:08,389 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:08,653 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46387', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:08,908 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46387
-2022-08-26 14:05:08,908 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:08,908 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43141
-2022-08-26 14:05:08,908 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:08,908 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38571', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:08,909 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:08,909 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38571
-2022-08-26 14:05:08,909 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:08,909 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43141
-2022-08-26 14:05:08,909 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:08,910 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:08,915 - distributed.scheduler - INFO - Receive client connection: Client-c4629892-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:08,915 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:08,927 - distributed.scheduler - INFO - Remove client Client-c4629892-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:08,927 - distributed.scheduler - INFO - Remove client Client-c4629892-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:08,927 - distributed.scheduler - INFO - Close client connection: Client-c4629892-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_dump_cluster_state_sync[msgpack-False] 2022-08-26 14:05:09,786 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:09,788 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:09,791 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:09,792 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45943
-2022-08-26 14:05:09,792 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:09,794 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-lsq4hm_7', purging
-2022-08-26 14:05:09,794 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-lv2uhzuk', purging
-2022-08-26 14:05:09,800 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42743
-2022-08-26 14:05:09,800 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42743
-2022-08-26 14:05:09,800 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43927
-2022-08-26 14:05:09,800 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45943
-2022-08-26 14:05:09,800 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:09,800 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:09,800 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:09,800 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5w3_gbhx
-2022-08-26 14:05:09,800 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:09,800 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42277
-2022-08-26 14:05:09,800 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42277
-2022-08-26 14:05:09,800 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39765
-2022-08-26 14:05:09,800 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45943
-2022-08-26 14:05:09,800 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:09,800 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:09,801 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:09,801 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q0oxht36
-2022-08-26 14:05:09,801 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:10,067 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42277', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:10,324 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42277
-2022-08-26 14:05:10,324 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:10,324 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45943
-2022-08-26 14:05:10,325 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:10,325 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42743', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:10,325 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42743
-2022-08-26 14:05:10,325 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:10,325 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:10,326 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45943
-2022-08-26 14:05:10,326 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:10,326 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:10,331 - distributed.scheduler - INFO - Receive client connection: Client-c53abc14-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:10,332 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:10,345 - distributed.scheduler - INFO - Remove client Client-c53abc14-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:10,345 - distributed.scheduler - INFO - Remove client Client-c53abc14-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:10,345 - distributed.scheduler - INFO - Close client connection: Client-c53abc14-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_dump_cluster_state_sync[yaml-True] 2022-08-26 14:05:11,198 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:11,200 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:11,203 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:11,203 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39753
-2022-08-26 14:05:11,203 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:11,205 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-q0oxht36', purging
-2022-08-26 14:05:11,205 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-5w3_gbhx', purging
-2022-08-26 14:05:11,211 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35489
-2022-08-26 14:05:11,211 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35489
-2022-08-26 14:05:11,211 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46723
-2022-08-26 14:05:11,211 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39753
-2022-08-26 14:05:11,211 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:11,211 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:11,211 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:11,211 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mrexdq6t
-2022-08-26 14:05:11,211 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:11,211 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40109
-2022-08-26 14:05:11,211 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40109
-2022-08-26 14:05:11,212 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45727
-2022-08-26 14:05:11,212 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39753
-2022-08-26 14:05:11,212 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:11,212 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:11,212 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:11,212 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gk8f654u
-2022-08-26 14:05:11,212 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:11,475 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40109', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:11,734 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40109
-2022-08-26 14:05:11,734 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:11,734 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39753
-2022-08-26 14:05:11,735 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:11,735 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35489', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:11,735 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35489
-2022-08-26 14:05:11,735 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:11,735 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:11,735 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39753
-2022-08-26 14:05:11,736 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:11,736 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:11,741 - distributed.scheduler - INFO - Receive client connection: Client-c611dc8b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:11,742 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:11,915 - distributed.scheduler - INFO - Remove client Client-c611dc8b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:11,916 - distributed.scheduler - INFO - Remove client Client-c611dc8b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:11,916 - distributed.scheduler - INFO - Close client connection: Client-c611dc8b-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_dump_cluster_state_sync[yaml-False] 2022-08-26 14:05:12,771 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:12,774 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:12,777 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:12,777 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35081
-2022-08-26 14:05:12,777 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:12,779 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-gk8f654u', purging
-2022-08-26 14:05:12,779 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-mrexdq6t', purging
-2022-08-26 14:05:12,785 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40861
-2022-08-26 14:05:12,785 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40861
-2022-08-26 14:05:12,785 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39609
-2022-08-26 14:05:12,785 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35081
-2022-08-26 14:05:12,786 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:12,786 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:12,786 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:12,786 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37769
-2022-08-26 14:05:12,786 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j31hb9jh
-2022-08-26 14:05:12,786 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37769
-2022-08-26 14:05:12,786 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:12,786 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39705
-2022-08-26 14:05:12,786 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35081
-2022-08-26 14:05:12,786 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:12,786 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:12,786 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:12,786 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-807tufqj
-2022-08-26 14:05:12,786 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:13,048 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37769', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:13,301 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37769
-2022-08-26 14:05:13,301 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:13,301 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35081
-2022-08-26 14:05:13,301 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:13,302 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40861', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:13,302 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40861
-2022-08-26 14:05:13,302 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:13,302 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:13,302 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35081
-2022-08-26 14:05:13,303 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:13,303 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:13,308 - distributed.scheduler - INFO - Receive client connection: Client-c700f1fc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:13,308 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:13,486 - distributed.scheduler - INFO - Remove client Client-c700f1fc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:13,486 - distributed.scheduler - INFO - Remove client Client-c700f1fc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:13,486 - distributed.scheduler - INFO - Close client connection: Client-c700f1fc-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client.py::test_dump_cluster_state_async[msgpack-True] 2022-08-26 14:05:13,739 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_dump_cluster_state_async[msgpack-False] 2022-08-26 14:05:13,980 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_dump_cluster_state_async[yaml-True] 2022-08-26 14:05:14,394 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_dump_cluster_state_async[yaml-False] 2022-08-26 14:05:14,807 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_dump_cluster_state_json[True] 2022-08-26 14:05:15,048 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_dump_cluster_state_json[False] 2022-08-26 14:05:15,081 - distributed.core - ERROR - Exception while handling op dump_cluster_state_to_url
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 3329, in dump_cluster_state_to_url
-    await cluster_dump.write_state(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/cluster_dump.py", line 57, in write_state
-    raise ValueError(
-ValueError: Unsupported format 'json'. Possible values are 'msgpack' or 'yaml'.
-2022-08-26 14:05:15,288 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_dump_cluster_state_exclude_default 2022-08-26 14:05:16,399 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::TestClientSecurityLoader::test_security_loader 2022-08-26 14:05:16,427 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:16,428 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:16,429 - distributed.scheduler - INFO -   Scheduler at: tls://192.168.1.159:37291
-2022-08-26 14:05:16,429 - distributed.scheduler - INFO -   dashboard at:                    :43509
-2022-08-26 14:05:16,438 - distributed.scheduler - INFO - Receive client connection: Client-c8dd9d02-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,438 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:16,449 - distributed.scheduler - INFO - Remove client Client-c8dd9d02-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,449 - distributed.scheduler - INFO - Remove client Client-c8dd9d02-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,450 - distributed.scheduler - INFO - Close client connection: Client-c8dd9d02-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,450 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:16,450 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_client.py::TestClientSecurityLoader::test_security_loader_ignored_if_explicit_security_provided 2022-08-26 14:05:16,477 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:16,479 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:16,480 - distributed.scheduler - INFO -   Scheduler at: tls://192.168.1.159:37547
-2022-08-26 14:05:16,480 - distributed.scheduler - INFO -   dashboard at:                    :40223
-2022-08-26 14:05:16,489 - distributed.scheduler - INFO - Receive client connection: Client-c8e56933-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,489 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:16,500 - distributed.scheduler - INFO - Remove client Client-c8e56933-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,500 - distributed.scheduler - INFO - Remove client Client-c8e56933-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,501 - distributed.scheduler - INFO - Close client connection: Client-c8e56933-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,501 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:16,501 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_client.py::TestClientSecurityLoader::test_security_loader_ignored_if_returns_none 2022-08-26 14:05:16,527 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:16,528 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:16,529 - distributed.scheduler - INFO -   Scheduler at: tls://192.168.1.159:36453
-2022-08-26 14:05:16,529 - distributed.scheduler - INFO -   dashboard at:                    :42017
-2022-08-26 14:05:16,537 - distributed.scheduler - INFO - Receive client connection: Client-c8ecd7cf-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,538 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:16,549 - distributed.scheduler - INFO - Remove client Client-c8ecd7cf-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,549 - distributed.scheduler - INFO - Remove client Client-c8ecd7cf-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,549 - distributed.scheduler - INFO - Close client connection: Client-c8ecd7cf-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,550 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:16,550 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_client.py::TestClientSecurityLoader::test_security_loader_import_failed PASSED
-distributed/tests/test_client.py::test_benchmark_hardware SKIPPED (n...)
-distributed/tests/test_client.py::test_benchmark_hardware_no_workers 2022-08-26 14:05:16,583 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:16,584 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:16,584 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34863
-2022-08-26 14:05:16,585 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36903
-2022-08-26 14:05:16,588 - distributed.scheduler - INFO - Receive client connection: Client-c8f55c4e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,588 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:16,601 - distributed.scheduler - INFO - Remove client Client-c8f55c4e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,601 - distributed.scheduler - INFO - Remove client Client-c8f55c4e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,601 - distributed.scheduler - INFO - Close client connection: Client-c8f55c4e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,601 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:16,602 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:05:16,798 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_wait_for_workers_updates_info 2022-08-26 14:05:16,804 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:16,806 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:16,806 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39433
-2022-08-26 14:05:16,806 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46393
-2022-08-26 14:05:16,809 - distributed.scheduler - INFO - Receive client connection: Client-c91725ca-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,809 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:16,812 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35889
-2022-08-26 14:05:16,812 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35889
-2022-08-26 14:05:16,812 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45923
-2022-08-26 14:05:16,812 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39433
-2022-08-26 14:05:16,813 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:16,813 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:05:16,813 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:16,813 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m30bla3k
-2022-08-26 14:05:16,813 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:16,815 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35889', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:16,815 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35889
-2022-08-26 14:05:16,815 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:16,815 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39433
-2022-08-26 14:05:16,815 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:16,816 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:16,917 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35889
-2022-08-26 14:05:16,918 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35889', status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:16,918 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35889
-2022-08-26 14:05:16,918 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:05:16,918 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c614a972-d9d7-468d-9a3d-3bf4b6483417 Address tcp://127.0.0.1:35889 Status: Status.closing
-2022-08-26 14:05:16,919 - distributed.scheduler - INFO - Remove client Client-c91725ca-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,919 - distributed.scheduler - INFO - Remove client Client-c91725ca-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,919 - distributed.scheduler - INFO - Close client connection: Client-c91725ca-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:16,920 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:16,920 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:05:17,114 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client.py::test_quiet_close_process[True] SKIPPED
-distributed/tests/test_client.py::test_quiet_close_process[False] SKIPPED
-distributed/tests/test_client.py::test_deprecated_loop_properties 2022-08-26 14:05:17,121 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:17,123 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:17,123 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43873
-2022-08-26 14:05:17,123 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38847
-2022-08-26 14:05:17,127 - distributed.scheduler - INFO - Receive client connection: ExampleClient-c94796cc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:17,127 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:17,138 - distributed.scheduler - INFO - Remove client ExampleClient-c94796cc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:17,138 - distributed.scheduler - INFO - Remove client ExampleClient-c94796cc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:17,139 - distributed.scheduler - INFO - Close client connection: ExampleClient-c94796cc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:17,139 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:17,139 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:05:17,333 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_client_executor.py::test_submit 2022-08-26 14:05:18,179 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:18,182 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:18,185 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:18,185 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33741
-2022-08-26 14:05:18,185 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:18,196 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41049
-2022-08-26 14:05:18,196 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41049
-2022-08-26 14:05:18,196 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34673
-2022-08-26 14:05:18,196 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33741
-2022-08-26 14:05:18,196 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:18,196 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:18,196 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:18,196 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jp7s4o6t
-2022-08-26 14:05:18,196 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:18,237 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44631
-2022-08-26 14:05:18,237 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44631
-2022-08-26 14:05:18,237 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35037
-2022-08-26 14:05:18,237 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33741
-2022-08-26 14:05:18,237 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:18,237 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:18,237 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:18,237 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xxvz4_22
-2022-08-26 14:05:18,237 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:18,474 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41049', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:18,735 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41049
-2022-08-26 14:05:18,736 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:18,736 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33741
-2022-08-26 14:05:18,736 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:18,736 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44631', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:18,737 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:18,737 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44631
-2022-08-26 14:05:18,737 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:18,737 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33741
-2022-08-26 14:05:18,738 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:18,738 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:18,743 - distributed.scheduler - INFO - Receive client connection: Client-ca3e4464-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:18,743 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:18,853 - distributed.worker - WARNING - Compute Failed
-Key:       throws-bec8229af50568bbad1bd96d926313d3
-Function:  throws
-args:      ('foo')
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED2022-08-26 14:05:18,859 - distributed.scheduler - INFO - Remove client Client-ca3e4464-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:18,859 - distributed.scheduler - INFO - Remove client Client-ca3e4464-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_executor.py::test_as_completed 2022-08-26 14:05:19,701 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:19,704 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:19,707 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:19,707 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41563
-2022-08-26 14:05:19,707 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:19,717 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-jp7s4o6t', purging
-2022-08-26 14:05:19,717 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-xxvz4_22', purging
-2022-08-26 14:05:19,723 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37721
-2022-08-26 14:05:19,723 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37721
-2022-08-26 14:05:19,723 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37443
-2022-08-26 14:05:19,723 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41563
-2022-08-26 14:05:19,723 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:19,723 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:19,724 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:19,724 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pdzzrfeg
-2022-08-26 14:05:19,724 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:19,761 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34285
-2022-08-26 14:05:19,761 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34285
-2022-08-26 14:05:19,761 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36931
-2022-08-26 14:05:19,761 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41563
-2022-08-26 14:05:19,761 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:19,761 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:19,761 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:19,761 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pol7j6ol
-2022-08-26 14:05:19,761 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:20,006 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37721', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:20,261 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37721
-2022-08-26 14:05:20,261 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:20,261 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41563
-2022-08-26 14:05:20,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:20,262 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34285', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:20,262 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34285
-2022-08-26 14:05:20,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:20,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:20,262 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41563
-2022-08-26 14:05:20,263 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:20,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:20,268 - distributed.scheduler - INFO - Receive client connection: Client-cb26f345-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:20,269 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:20,413 - distributed.scheduler - INFO - Remove client Client-cb26f345-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:20,413 - distributed.scheduler - INFO - Remove client Client-cb26f345-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:20,413 - distributed.scheduler - INFO - Close client connection: Client-cb26f345-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_executor.py::test_wait 2022-08-26 14:05:21,270 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:21,273 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:21,276 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:21,276 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43189
-2022-08-26 14:05:21,276 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:21,279 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-pol7j6ol', purging
-2022-08-26 14:05:21,279 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-pdzzrfeg', purging
-2022-08-26 14:05:21,286 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34991
-2022-08-26 14:05:21,286 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34991
-2022-08-26 14:05:21,286 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46281
-2022-08-26 14:05:21,286 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43189
-2022-08-26 14:05:21,286 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:21,286 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:21,286 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:21,286 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0fze7mai
-2022-08-26 14:05:21,286 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:21,328 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43847
-2022-08-26 14:05:21,328 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43847
-2022-08-26 14:05:21,328 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33823
-2022-08-26 14:05:21,328 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43189
-2022-08-26 14:05:21,328 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:21,328 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:21,328 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:21,328 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6m8dhpyz
-2022-08-26 14:05:21,328 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:21,562 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34991', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:21,818 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34991
-2022-08-26 14:05:21,818 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:21,818 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43189
-2022-08-26 14:05:21,818 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:21,819 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43847', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:21,819 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43847
-2022-08-26 14:05:21,819 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:21,819 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:21,819 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43189
-2022-08-26 14:05:21,819 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:21,820 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:21,825 - distributed.scheduler - INFO - Receive client connection: Client-cc148627-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:21,825 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:22,751 - distributed.worker - WARNING - Compute Failed
-Key:       throws-649d2626-b4d6-493d-8926-ff71ebfab9ee
-Function:  throws
-args:      (None)
-kwargs:    {}
-Exception: "RuntimeError('hello!')"
-
-PASSED2022-08-26 14:05:22,968 - distributed.scheduler - INFO - Remove client Client-cc148627-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:22,968 - distributed.scheduler - INFO - Remove client Client-cc148627-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:22,969 - distributed.scheduler - INFO - Close client connection: Client-cc148627-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_executor.py::test_cancellation 2022-08-26 14:05:23,821 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:23,824 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:23,827 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:23,827 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38001
-2022-08-26 14:05:23,827 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:23,839 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-6m8dhpyz', purging
-2022-08-26 14:05:23,839 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-0fze7mai', purging
-2022-08-26 14:05:23,846 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46419
-2022-08-26 14:05:23,846 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46419
-2022-08-26 14:05:23,846 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46159
-2022-08-26 14:05:23,846 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38001
-2022-08-26 14:05:23,846 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:23,846 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:23,846 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:23,846 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ia4xx7so
-2022-08-26 14:05:23,846 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:23,882 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37783
-2022-08-26 14:05:23,882 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37783
-2022-08-26 14:05:23,882 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35937
-2022-08-26 14:05:23,882 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38001
-2022-08-26 14:05:23,882 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:23,882 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:23,882 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:23,882 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0aulrw0g
-2022-08-26 14:05:23,882 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:24,127 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46419', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:24,380 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46419
-2022-08-26 14:05:24,381 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:24,381 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38001
-2022-08-26 14:05:24,381 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:24,381 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37783', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:24,382 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37783
-2022-08-26 14:05:24,382 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:24,382 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:24,382 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38001
-2022-08-26 14:05:24,382 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:24,383 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:24,388 - distributed.scheduler - INFO - Receive client connection: Client-cd9b9806-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:24,389 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:24,403 - distributed.scheduler - INFO - Client Client-cd9b9806-2582-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:05:24,403 - distributed.scheduler - INFO - Scheduler cancels key sleep-6831d175-10f9-4d88-b601-4abcc96f729d.  Force=False
-PASSED2022-08-26 14:05:24,412 - distributed.scheduler - INFO - Remove client Client-cd9b9806-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:24,412 - distributed.scheduler - INFO - Remove client Client-cd9b9806-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:24,412 - distributed.scheduler - INFO - Close client connection: Client-cd9b9806-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_executor.py::test_cancellation_wait 2022-08-26 14:05:25,254 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:25,257 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:25,260 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:25,260 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45433
-2022-08-26 14:05:25,260 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:25,288 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ia4xx7so', purging
-2022-08-26 14:05:25,288 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-0aulrw0g', purging
-2022-08-26 14:05:25,295 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41013
-2022-08-26 14:05:25,295 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41013
-2022-08-26 14:05:25,295 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44835
-2022-08-26 14:05:25,295 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45433
-2022-08-26 14:05:25,295 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:25,295 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:25,295 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:25,295 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-duh42s2x
-2022-08-26 14:05:25,295 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:25,325 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40395
-2022-08-26 14:05:25,325 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40395
-2022-08-26 14:05:25,325 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43777
-2022-08-26 14:05:25,325 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45433
-2022-08-26 14:05:25,325 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:25,325 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:25,325 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:25,325 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_2a8uqiy
-2022-08-26 14:05:25,325 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:25,574 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41013', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:25,830 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41013
-2022-08-26 14:05:25,830 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:25,830 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45433
-2022-08-26 14:05:25,830 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:25,831 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40395', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:25,831 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40395
-2022-08-26 14:05:25,831 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:25,831 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:25,831 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45433
-2022-08-26 14:05:25,831 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:25,832 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:25,837 - distributed.scheduler - INFO - Receive client connection: Client-ce78b3e1-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:25,838 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:25,846 - distributed.scheduler - INFO - Client Client-ce78b3e1-2582-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:05:25,946 - distributed.scheduler - INFO - Scheduler cancels key slowinc-de2a2299-5231-4c8d-a4ae-37a58d047469.  Force=False
-PASSED2022-08-26 14:05:26,865 - distributed.scheduler - INFO - Remove client Client-ce78b3e1-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:26,865 - distributed.scheduler - INFO - Remove client Client-ce78b3e1-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_executor.py::test_cancellation_as_completed 2022-08-26 14:05:27,718 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:27,721 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:27,724 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:27,724 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44195
-2022-08-26 14:05:27,724 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:27,730 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-duh42s2x', purging
-2022-08-26 14:05:27,730 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-_2a8uqiy', purging
-2022-08-26 14:05:27,737 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40679
-2022-08-26 14:05:27,737 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40679
-2022-08-26 14:05:27,737 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42745
-2022-08-26 14:05:27,737 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44195
-2022-08-26 14:05:27,737 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:27,737 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:27,737 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:27,737 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hxp44jr_
-2022-08-26 14:05:27,737 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:27,774 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41873
-2022-08-26 14:05:27,774 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41873
-2022-08-26 14:05:27,774 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35433
-2022-08-26 14:05:27,774 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44195
-2022-08-26 14:05:27,774 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:27,774 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:27,774 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:27,774 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-v_gw_oy9
-2022-08-26 14:05:27,774 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:28,015 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40679', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:28,270 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40679
-2022-08-26 14:05:28,270 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:28,271 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44195
-2022-08-26 14:05:28,271 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:28,271 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41873', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:28,272 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41873
-2022-08-26 14:05:28,272 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:28,272 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:28,272 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44195
-2022-08-26 14:05:28,272 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:28,273 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:28,278 - distributed.scheduler - INFO - Receive client connection: Client-cfed18e8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:28,278 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:28,285 - distributed.scheduler - INFO - Client Client-cfed18e8-2582-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:05:28,387 - distributed.scheduler - INFO - Scheduler cancels key slowinc-c8a59dce-5eab-4b7d-b85a-b7047c03e615.  Force=False
-2022-08-26 14:05:28,388 - distributed.scheduler - INFO - Client Client-cfed18e8-2582-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:05:28,388 - distributed.scheduler - INFO - Scheduler cancels key slowinc-bd849e14-a006-4fde-ba9d-6608cb445a0e.  Force=False
-PASSED2022-08-26 14:05:29,114 - distributed.scheduler - INFO - Remove client Client-cfed18e8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:29,114 - distributed.scheduler - INFO - Remove client Client-cfed18e8-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_executor.py::test_map SKIPPED (need --...)
-distributed/tests/test_client_executor.py::test_pure 2022-08-26 14:05:29,962 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:29,965 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:29,968 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:29,968 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41103
-2022-08-26 14:05:29,968 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:29,980 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-v_gw_oy9', purging
-2022-08-26 14:05:29,980 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-hxp44jr_', purging
-2022-08-26 14:05:29,987 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42153
-2022-08-26 14:05:29,987 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42153
-2022-08-26 14:05:29,987 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38949
-2022-08-26 14:05:29,987 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41103
-2022-08-26 14:05:29,987 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:29,987 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:29,987 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:29,987 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s44q62u7
-2022-08-26 14:05:29,987 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:30,033 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33217
-2022-08-26 14:05:30,033 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33217
-2022-08-26 14:05:30,033 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46451
-2022-08-26 14:05:30,033 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41103
-2022-08-26 14:05:30,033 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:30,033 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:30,033 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:30,033 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zov3wqij
-2022-08-26 14:05:30,034 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:30,266 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42153', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:30,523 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42153
-2022-08-26 14:05:30,523 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:30,523 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41103
-2022-08-26 14:05:30,523 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:30,524 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33217', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:30,524 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:30,524 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33217
-2022-08-26 14:05:30,524 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:30,524 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41103
-2022-08-26 14:05:30,525 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:30,525 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:30,530 - distributed.scheduler - INFO - Receive client connection: Client-d144cee2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:30,531 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:30,610 - distributed.scheduler - INFO - Remove client Client-d144cee2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:30,611 - distributed.scheduler - INFO - Remove client Client-d144cee2-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_executor.py::test_workers 2022-08-26 14:05:31,461 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:31,463 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:31,466 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:31,467 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38587
-2022-08-26 14:05:31,467 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:31,469 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-s44q62u7', purging
-2022-08-26 14:05:31,469 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-zov3wqij', purging
-2022-08-26 14:05:31,476 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42239
-2022-08-26 14:05:31,476 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42239
-2022-08-26 14:05:31,476 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36793
-2022-08-26 14:05:31,476 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38587
-2022-08-26 14:05:31,476 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:31,476 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:31,476 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:31,476 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-68sxlym5
-2022-08-26 14:05:31,476 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:31,518 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37551
-2022-08-26 14:05:31,518 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37551
-2022-08-26 14:05:31,518 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36855
-2022-08-26 14:05:31,518 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38587
-2022-08-26 14:05:31,518 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:31,518 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:31,518 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:31,518 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d322ndvk
-2022-08-26 14:05:31,519 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:31,754 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42239', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:32,010 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42239
-2022-08-26 14:05:32,010 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:32,010 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38587
-2022-08-26 14:05:32,011 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:32,011 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37551', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:32,011 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37551
-2022-08-26 14:05:32,011 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:32,012 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:32,012 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38587
-2022-08-26 14:05:32,012 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:32,012 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:32,018 - distributed.scheduler - INFO - Receive client connection: Client-d227c241-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:32,018 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:32,267 - distributed.scheduler - INFO - Remove client Client-d227c241-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:32,268 - distributed.scheduler - INFO - Remove client Client-d227c241-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_executor.py::test_unsupported_arguments 2022-08-26 14:05:33,121 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:33,124 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:33,127 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:33,127 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35737
-2022-08-26 14:05:33,127 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:33,130 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-d322ndvk', purging
-2022-08-26 14:05:33,130 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-68sxlym5', purging
-2022-08-26 14:05:33,137 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40875
-2022-08-26 14:05:33,137 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40875
-2022-08-26 14:05:33,137 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34165
-2022-08-26 14:05:33,137 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35737
-2022-08-26 14:05:33,137 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:33,137 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:33,137 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:33,137 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ojjntupr
-2022-08-26 14:05:33,137 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:33,176 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39657
-2022-08-26 14:05:33,177 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39657
-2022-08-26 14:05:33,177 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38125
-2022-08-26 14:05:33,177 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35737
-2022-08-26 14:05:33,177 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:33,177 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:33,177 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:33,177 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0cz4hhow
-2022-08-26 14:05:33,177 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:33,419 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40875', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:33,673 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40875
-2022-08-26 14:05:33,673 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:33,673 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35737
-2022-08-26 14:05:33,673 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:33,674 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39657', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:33,674 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39657
-2022-08-26 14:05:33,674 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:33,674 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:33,674 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35737
-2022-08-26 14:05:33,675 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:33,675 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:33,681 - distributed.scheduler - INFO - Receive client connection: Client-d32577b7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:33,681 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:33,693 - distributed.scheduler - INFO - Remove client Client-d32577b7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:33,693 - distributed.scheduler - INFO - Remove client Client-d32577b7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:33,693 - distributed.scheduler - INFO - Close client connection: Client-d32577b7-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_executor.py::test_retries 2022-08-26 14:05:34,547 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:34,550 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:34,552 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:34,553 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:32825
-2022-08-26 14:05:34,553 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:34,562 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ojjntupr', purging
-2022-08-26 14:05:34,562 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-0cz4hhow', purging
-2022-08-26 14:05:34,569 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33539
-2022-08-26 14:05:34,569 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33539
-2022-08-26 14:05:34,569 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35155
-2022-08-26 14:05:34,569 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32825
-2022-08-26 14:05:34,569 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:34,569 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:34,569 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:34,569 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8al7zzlx
-2022-08-26 14:05:34,569 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:34,606 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37729
-2022-08-26 14:05:34,606 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37729
-2022-08-26 14:05:34,606 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44535
-2022-08-26 14:05:34,606 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32825
-2022-08-26 14:05:34,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:34,606 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:34,606 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:34,606 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yq6fdffy
-2022-08-26 14:05:34,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:34,851 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33539', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:35,107 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33539
-2022-08-26 14:05:35,108 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:35,108 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32825
-2022-08-26 14:05:35,108 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:35,108 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37729', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:35,109 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37729
-2022-08-26 14:05:35,109 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:35,109 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:35,109 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32825
-2022-08-26 14:05:35,109 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:35,110 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:35,115 - distributed.scheduler - INFO - Receive client connection: Client-d400583c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:35,115 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:35,212 - distributed.worker - WARNING - Compute Failed
-Key:       func-a2762ec9-7a05-47ca-971c-64e5c450a2e7
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:05:35,296 - distributed.worker - WARNING - Compute Failed
-Key:       func-a2762ec9-7a05-47ca-971c-64e5c450a2e7
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:05:35,299 - distributed.worker - WARNING - Compute Failed
-Key:       func-a2762ec9-7a05-47ca-971c-64e5c450a2e7
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:05:35,304 - distributed.worker - WARNING - Compute Failed
-Key:       func-a2762ec9-7a05-47ca-971c-64e5c450a2e7
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:05:35,318 - distributed.worker - WARNING - Compute Failed
-Key:       func-270059855356bedd5ebdd7997ba506a7
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:05:35,319 - distributed.worker - WARNING - Compute Failed
-Key:       func-270059855356bedd5ebdd7997ba506a7
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:05:35,323 - distributed.worker - WARNING - Compute Failed
-Key:       func-270059855356bedd5ebdd7997ba506a7
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:05:35,330 - distributed.worker - WARNING - Compute Failed
-Key:       func-270059855356bedd5ebdd7997ba506a7
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:05:35,341 - distributed.worker - WARNING - Compute Failed
-Key:       func-9e7d61faa867ca4540c4ae8ac9682a71
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:05:35,342 - distributed.worker - WARNING - Compute Failed
-Key:       func-9e7d61faa867ca4540c4ae8ac9682a71
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:05:35,346 - distributed.worker - WARNING - Compute Failed
-Key:       func-9e7d61faa867ca4540c4ae8ac9682a71
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:05:35,354 - distributed.worker - WARNING - Compute Failed
-Key:       func-3a8fb3c9fd785ae4372e3e9072ed27cc
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-PASSED2022-08-26 14:05:35,361 - distributed.scheduler - INFO - Remove client Client-d400583c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:35,361 - distributed.scheduler - INFO - Remove client Client-d400583c-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_executor.py::test_shutdown_wait 2022-08-26 14:05:36,205 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:36,208 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:36,211 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:36,211 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39863
-2022-08-26 14:05:36,211 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:36,217 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-8al7zzlx', purging
-2022-08-26 14:05:36,217 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-yq6fdffy', purging
-2022-08-26 14:05:36,223 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39639
-2022-08-26 14:05:36,224 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39639
-2022-08-26 14:05:36,224 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35557
-2022-08-26 14:05:36,224 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39863
-2022-08-26 14:05:36,224 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:36,224 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:36,224 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:36,224 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-js87y5cf
-2022-08-26 14:05:36,224 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:36,269 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35273
-2022-08-26 14:05:36,269 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35273
-2022-08-26 14:05:36,269 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43645
-2022-08-26 14:05:36,269 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39863
-2022-08-26 14:05:36,269 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:36,269 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:36,269 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:36,269 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f2_iig8l
-2022-08-26 14:05:36,269 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:36,501 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39639', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:36,756 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39639
-2022-08-26 14:05:36,757 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:36,757 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39863
-2022-08-26 14:05:36,757 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:36,757 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35273', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:36,758 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35273
-2022-08-26 14:05:36,758 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:36,758 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:36,758 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39863
-2022-08-26 14:05:36,758 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:36,759 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:36,766 - distributed.scheduler - INFO - Receive client connection: Client-d4fbf6e2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:36,766 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:37,896 - distributed.scheduler - INFO - Remove client Client-d4fbf6e2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:37,896 - distributed.scheduler - INFO - Remove client Client-d4fbf6e2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:37,896 - distributed.scheduler - INFO - Close client connection: Client-d4fbf6e2-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_executor.py::test_shutdown_nowait 2022-08-26 14:05:38,745 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:38,748 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:38,751 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:38,751 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39915
-2022-08-26 14:05:38,751 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:38,762 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-js87y5cf', purging
-2022-08-26 14:05:38,762 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-f2_iig8l', purging
-2022-08-26 14:05:38,768 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36043
-2022-08-26 14:05:38,768 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36043
-2022-08-26 14:05:38,768 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38997
-2022-08-26 14:05:38,768 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39915
-2022-08-26 14:05:38,768 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:38,769 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:38,769 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:38,769 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ytzwnr_c
-2022-08-26 14:05:38,769 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:38,821 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46249
-2022-08-26 14:05:38,822 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46249
-2022-08-26 14:05:38,822 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33223
-2022-08-26 14:05:38,822 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39915
-2022-08-26 14:05:38,822 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:38,822 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:38,822 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:38,822 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2kvi7w67
-2022-08-26 14:05:38,822 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:39,050 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36043', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:39,306 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36043
-2022-08-26 14:05:39,306 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:39,306 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39915
-2022-08-26 14:05:39,306 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:39,307 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46249', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:39,307 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:39,307 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46249
-2022-08-26 14:05:39,307 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:39,308 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39915
-2022-08-26 14:05:39,308 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:39,309 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:39,314 - distributed.scheduler - INFO - Receive client connection: Client-d68120be-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:39,315 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:39,317 - distributed.scheduler - INFO - Client Client-d68120be-2582-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:05:39,418 - distributed.scheduler - INFO - Scheduler cancels key sleep-0a31544f7023add229cd965572566686.  Force=False
-PASSED2022-08-26 14:05:39,530 - distributed.scheduler - INFO - Remove client Client-d68120be-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:39,530 - distributed.scheduler - INFO - Remove client Client-d68120be-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:39,530 - distributed.scheduler - INFO - Close client connection: Client-d68120be-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_client_loop.py::test_close_loop_sync_start_new_loop FAILED
-distributed/tests/test_client_loop.py::test_close_loop_sync_start_new_loop ERROR
-distributed/tests/test_client_loop.py::test_close_loop_sync_use_running_loop FAILED
-distributed/tests/test_cluster_dump.py::test_tuple_to_list[input0-expected0] PASSED
-distributed/tests/test_cluster_dump.py::test_tuple_to_list[input1-expected1] PASSED
-distributed/tests/test_cluster_dump.py::test_tuple_to_list[input2-expected2] PASSED
-distributed/tests/test_cluster_dump.py::test_tuple_to_list[foo-foo] PASSED
-distributed/tests/test_cluster_dump.py::test_write_state_msgpack PASSED
-distributed/tests/test_cluster_dump.py::test_write_state_yaml PASSED
-distributed/tests/test_cluster_dump.py::test_cluster_dump_state 2022-08-26 14:05:39,885 - distributed.core - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/event.py", line 78, in event_wait
-    await future
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/locks.py", line 214, in wait
-    await fut
-asyncio.exceptions.CancelledError
-2022-08-26 14:05:40,080 - distributed.utils_perf - WARNING - full garbage collections took 65% CPU time recently (threshold: 10%)
-FAILED
-distributed/tests/test_cluster_dump.py::test_cluster_dump_story 2022-08-26 14:05:40,382 - distributed.utils_perf - WARNING - full garbage collections took 65% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_cluster_dump.py::test_cluster_dump_to_yamls 2022-08-26 14:05:40,728 - distributed.utils_perf - WARNING - full garbage collections took 66% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_collections.py::test_lru PASSED
-distributed/tests/test_collections.py::test_heapset PASSED
-distributed/tests/test_collections.py::test_heapset_popright[False] PASSED
-distributed/tests/test_collections.py::test_heapset_popright[True] PASSED
-distributed/tests/test_collections.py::test_heapset_pickle PASSED
-distributed/tests/test_config.py::test_logging_default 
-== Loggers (name, level, effective level, propagate) ==
-<root>                                   ERROR    ERROR    True 
-PIL.Image                                NOTSET   ERROR    True 
-PIL.PngImagePlugin                       NOTSET   ERROR    True 
-aiohttp.access                           NOTSET   ERROR    True 
-aiohttp.client                           NOTSET   ERROR    True 
-aiohttp.internal                         NOTSET   ERROR    True 
-aiohttp.server                           NOTSET   ERROR    True 
-aiohttp.web                              NOTSET   ERROR    True 
-aiohttp.websocket                        NOTSET   ERROR    True 
-asyncio                                  NOTSET   ERROR    True 
-asyncio.events                           NOTSET   ERROR    True 
-bokeh                                    ERROR    ERROR    False
-bokeh.application                        NOTSET   ERROR    True 
-bokeh.application.application            NOTSET   ERROR    True 
-bokeh.application.handlers               NOTSET   ERROR    True 
-bokeh.application.handlers.code          NOTSET   ERROR    True 
-bokeh.application.handlers.code_runner   NOTSET   ERROR    True 
-bokeh.application.handlers.directory     NOTSET   ERROR    True 
-bokeh.application.handlers.document_lifecycle NOTSET   ERROR    True 
-bokeh.application.handlers.function      NOTSET   ERROR    True 
-bokeh.application.handlers.handler       NOTSET   ERROR    True 
-bokeh.application.handlers.lifecycle     NOTSET   ERROR    True 
-bokeh.application.handlers.notebook      NOTSET   ERROR    True 
-bokeh.application.handlers.request_handler NOTSET   ERROR    True 
-bokeh.application.handlers.script        NOTSET   ERROR    True 
-bokeh.application.handlers.server_lifecycle NOTSET   ERROR    True 
-bokeh.application.handlers.server_request_handler NOTSET   ERROR    True 
-bokeh.colors                             NOTSET   ERROR    True 
-bokeh.colors.color                       NOTSET   ERROR    True 
-bokeh.colors.groups                      NOTSET   ERROR    True 
-bokeh.colors.hsl                         NOTSET   ERROR    True 
-bokeh.colors.named                       NOTSET   ERROR    True 
-bokeh.colors.rgb                         NOTSET   ERROR    True 
-bokeh.colors.util                        NOTSET   ERROR    True 
-bokeh.core.enums                         NOTSET   ERROR    True 
-bokeh.core.has_props                     NOTSET   ERROR    True 
-bokeh.core.json_encoder                  NOTSET   ERROR    True 
-bokeh.core.properties                    NOTSET   ERROR    True 
-bokeh.core.property                      NOTSET   ERROR    True 
-bokeh.core.property._sphinx              NOTSET   ERROR    True 
-bokeh.core.property.alias                NOTSET   ERROR    True 
-bokeh.core.property.any                  NOTSET   ERROR    True 
-bokeh.core.property.auto                 NOTSET   ERROR    True 
-bokeh.core.property.bases                NOTSET   ERROR    True 
-bokeh.core.property.color                NOTSET   ERROR    True 
-bokeh.core.property.container            NOTSET   ERROR    True 
-bokeh.core.property.dataspec             NOTSET   ERROR    True 
-bokeh.core.property.datetime             NOTSET   ERROR    True 
-bokeh.core.property.descriptor_factory   NOTSET   ERROR    True 
-bokeh.core.property.descriptors          NOTSET   ERROR    True 
-bokeh.core.property.either               NOTSET   ERROR    True 
-bokeh.core.property.enum                 NOTSET   ERROR    True 
-bokeh.core.property.factors              NOTSET   ERROR    True 
-bokeh.core.property.include              NOTSET   ERROR    True 
-bokeh.core.property.instance             NOTSET   ERROR    True 
-bokeh.core.property.json                 NOTSET   ERROR    True 
-bokeh.core.property.nullable             NOTSET   ERROR    True 
-bokeh.core.property.numeric              NOTSET   ERROR    True 
-bokeh.core.property.override             NOTSET   ERROR    True 
-bokeh.core.property.pandas               NOTSET   ERROR    True 
-bokeh.core.property.primitive            NOTSET   ERROR    True 
-bokeh.core.property.readonly             NOTSET   ERROR    True 
-bokeh.core.property.singletons           NOTSET   ERROR    True 
-bokeh.core.property.string               NOTSET   ERROR    True 
-bokeh.core.property.struct               NOTSET   ERROR    True 
-bokeh.core.property.text_like            NOTSET   ERROR    True 
-bokeh.core.property.validation           NOTSET   ERROR    True 
-bokeh.core.property.visual               NOTSET   ERROR    True 
-bokeh.core.property.wrappers             NOTSET   ERROR    True 
-bokeh.core.property_mixins               NOTSET   ERROR    True 
-bokeh.core.query                         NOTSET   ERROR    True 
-bokeh.core.templates                     NOTSET   ERROR    True 
-bokeh.core.types                         NOTSET   ERROR    True 
-bokeh.core.validation                    NOTSET   ERROR    True 
-bokeh.core.validation.check              NOTSET   ERROR    True 
-bokeh.core.validation.decorators         NOTSET   ERROR    True 
-bokeh.core.validation.errors             NOTSET   ERROR    True 
-bokeh.core.validation.issue              NOTSET   ERROR    True 
-bokeh.core.validation.warnings           NOTSET   ERROR    True 
-bokeh.document                           NOTSET   ERROR    True 
-bokeh.document.callbacks                 NOTSET   ERROR    True 
-bokeh.document.document                  NOTSET   ERROR    True 
-bokeh.document.events                    NOTSET   ERROR    True 
-bokeh.document.json                      NOTSET   ERROR    True 
-bokeh.document.locking                   NOTSET   ERROR    True 
-bokeh.document.models                    NOTSET   ERROR    True 
-bokeh.document.modules                   NOTSET   ERROR    True 
-bokeh.document.util                      NOTSET   ERROR    True 
-bokeh.embed                              NOTSET   ERROR    True 
-bokeh.embed.bundle                       NOTSET   ERROR    True 
-bokeh.embed.elements                     NOTSET   ERROR    True 
-bokeh.embed.server                       NOTSET   ERROR    True 
-bokeh.embed.standalone                   NOTSET   ERROR    True 
-bokeh.embed.util                         NOTSET   ERROR    True 
-bokeh.embed.wrappers                     NOTSET   ERROR    True 
-bokeh.events                             NOTSET   ERROR    True 
-bokeh.io                                 NOTSET   ERROR    True 
-bokeh.io.doc                             NOTSET   ERROR    True 
-bokeh.io.export                          NOTSET   ERROR    True 
-bokeh.io.notebook                        NOTSET   ERROR    True 
-bokeh.io.output                          NOTSET   ERROR    True 
-bokeh.io.saving                          NOTSET   ERROR    True 
-bokeh.io.showing                         NOTSET   ERROR    True 
-bokeh.io.state                           NOTSET   ERROR    True 
-bokeh.io.util                            NOTSET   ERROR    True 
-bokeh.layouts                            NOTSET   ERROR    True 
-bokeh.model                              NOTSET   ERROR    True 
-bokeh.model.data_model                   NOTSET   ERROR    True 
-bokeh.model.docs                         NOTSET   ERROR    True 
-bokeh.model.model                        NOTSET   ERROR    True 
-bokeh.model.util                         NOTSET   ERROR    True 
-bokeh.models                             NOTSET   ERROR    True 
-bokeh.models.annotations                 NOTSET   ERROR    True 
-bokeh.models.arrow_heads                 NOTSET   ERROR    True 
-bokeh.models.axes                        NOTSET   ERROR    True 
-bokeh.models.callbacks                   NOTSET   ERROR    True 
-bokeh.models.canvas                      NOTSET   ERROR    True 
-bokeh.models.css                         NOTSET   ERROR    True 
-bokeh.models.dom                         NOTSET   ERROR    True 
-bokeh.models.expressions                 NOTSET   ERROR    True 
-bokeh.models.filters                     NOTSET   ERROR    True 
-bokeh.models.formatters                  NOTSET   ERROR    True 
-bokeh.models.glyph                       NOTSET   ERROR    True 
-bokeh.models.glyphs                      NOTSET   ERROR    True 
-bokeh.models.graphs                      NOTSET   ERROR    True 
-bokeh.models.grids                       NOTSET   ERROR    True 
-bokeh.models.labeling                    NOTSET   ERROR    True 
-bokeh.models.layouts                     NOTSET   ERROR    True 
-bokeh.models.map_plots                   NOTSET   ERROR    True 
-bokeh.models.mappers                     NOTSET   ERROR    True 
-bokeh.models.markers                     NOTSET   ERROR    True 
-bokeh.models.plots                       NOTSET   ERROR    True 
-bokeh.models.ranges                      NOTSET   ERROR    True 
-bokeh.models.renderers                   NOTSET   ERROR    True 
-bokeh.models.scales                      NOTSET   ERROR    True 
-bokeh.models.selections                  NOTSET   ERROR    True 
-bokeh.models.sources                     NOTSET   ERROR    True 
-bokeh.models.text                        NOTSET   ERROR    True 
-bokeh.models.textures                    NOTSET   ERROR    True 
-bokeh.models.tickers                     NOTSET   ERROR    True 
-bokeh.models.tiles                       NOTSET   ERROR    True 
-bokeh.models.tools                       NOTSET   ERROR    True 
-bokeh.models.transforms                  NOTSET   ERROR    True 
-bokeh.models.widgets                     NOTSET   ERROR    True 
-bokeh.models.widgets.buttons             NOTSET   ERROR    True 
-bokeh.models.widgets.groups              NOTSET   ERROR    True 
-bokeh.models.widgets.icons               NOTSET   ERROR    True 
-bokeh.models.widgets.inputs              NOTSET   ERROR    True 
-bokeh.models.widgets.markups             NOTSET   ERROR    True 
-bokeh.models.widgets.panels              NOTSET   ERROR    True 
-bokeh.models.widgets.sliders             NOTSET   ERROR    True 
-bokeh.models.widgets.tables              NOTSET   ERROR    True 
-bokeh.models.widgets.widget              NOTSET   ERROR    True 
-bokeh.palettes                           NOTSET   ERROR    True 
-bokeh.plotting                           NOTSET   ERROR    True 
-bokeh.plotting._decorators               NOTSET   ERROR    True 
-bokeh.plotting._docstring                NOTSET   ERROR    True 
-bokeh.plotting._graph                    NOTSET   ERROR    True 
-bokeh.plotting._legends                  NOTSET   ERROR    True 
-bokeh.plotting._plot                     NOTSET   ERROR    True 
-bokeh.plotting._renderer                 NOTSET   ERROR    True 
-bokeh.plotting._stack                    NOTSET   ERROR    True 
-bokeh.plotting._tools                    NOTSET   ERROR    True 
-bokeh.plotting.figure                    NOTSET   ERROR    True 
-bokeh.plotting.glyph_api                 NOTSET   ERROR    True 
-bokeh.plotting.gmap                      NOTSET   ERROR    True 
-bokeh.plotting.graph                     NOTSET   ERROR    True 
-bokeh.protocol                           NOTSET   ERROR    True 
-bokeh.protocol.exceptions                NOTSET   ERROR    True 
-bokeh.protocol.message                   NOTSET   ERROR    True 
-bokeh.protocol.messages                  NOTSET   ERROR    True 
-bokeh.protocol.messages.ack              NOTSET   ERROR    True 
-bokeh.protocol.messages.error            NOTSET   ERROR    True 
-bokeh.protocol.messages.ok               NOTSET   ERROR    True 
-bokeh.protocol.messages.patch_doc        NOTSET   ERROR    True 
-bokeh.protocol.messages.pull_doc_reply   NOTSET   ERROR    True 
-bokeh.protocol.messages.pull_doc_req     NOTSET   ERROR    True 
-bokeh.protocol.messages.push_doc         NOTSET   ERROR    True 
-bokeh.protocol.messages.server_info_reply NOTSET   ERROR    True 
-bokeh.protocol.messages.server_info_req  NOTSET   ERROR    True 
-bokeh.protocol.receiver                  NOTSET   ERROR    True 
-bokeh.resources                          NOTSET   ERROR    True 
-bokeh.sampledata                         NOTSET   ERROR    True 
-bokeh.server.auth_provider               NOTSET   ERROR    True 
-bokeh.server.callbacks                   NOTSET   ERROR    True 
-bokeh.server.connection                  NOTSET   ERROR    True 
-bokeh.server.contexts                    NOTSET   ERROR    True 
-bokeh.server.protocol_handler            NOTSET   ERROR    True 
-bokeh.server.server                      NOTSET   ERROR    True 
-bokeh.server.session                     NOTSET   ERROR    True 
-bokeh.server.tornado                     NOTSET   ERROR    True 
-bokeh.server.urls                        NOTSET   ERROR    True 
-bokeh.server.util                        NOTSET   ERROR    True 
-bokeh.server.views.auth_mixin            NOTSET   ERROR    True 
-bokeh.server.views.autoload_js_handler   NOTSET   ERROR    True 
-bokeh.server.views.doc_handler           NOTSET   ERROR    True 
-bokeh.server.views.metadata_handler      NOTSET   ERROR    True 
-bokeh.server.views.multi_root_static_handler NOTSET   ERROR    True 
-bokeh.server.views.root_handler          NOTSET   ERROR    True 
-bokeh.server.views.session_handler       NOTSET   ERROR    True 
-bokeh.server.views.static_handler        NOTSET   ERROR    True 
-bokeh.server.views.ws                    NOTSET   ERROR    True 
-bokeh.settings                           NOTSET   ERROR    True 
-bokeh.themes                             NOTSET   ERROR    True 
-bokeh.themes.theme                       NOTSET   ERROR    True 
-bokeh.transform                          NOTSET   ERROR    True 
-bokeh.util.browser                       NOTSET   ERROR    True 
-bokeh.util.callback_manager              NOTSET   ERROR    True 
-bokeh.util.compiler                      NOTSET   ERROR    True 
-bokeh.util.dataclasses                   NOTSET   ERROR    True 
-bokeh.util.datatypes                     NOTSET   ERROR    True 
-bokeh.util.dependencies                  NOTSET   ERROR    True 
-bokeh.util.deprecation                   NOTSET   ERROR    True 
-bokeh.util.functions                     NOTSET   ERROR    True 
-bokeh.util.logconfig                     NOTSET   ERROR    True 
-bokeh.util.options                       NOTSET   ERROR    True 
-bokeh.util.paths                         NOTSET   ERROR    True 
-bokeh.util.sampledata                    NOTSET   ERROR    True 
-bokeh.util.serialization                 NOTSET   ERROR    True 
-bokeh.util.string                        NOTSET   ERROR    True 
-bokeh.util.token                         NOTSET   ERROR    True 
-bokeh.util.tornado                       NOTSET   ERROR    True 
-bokeh.util.version                       NOTSET   ERROR    True 
-bokeh.util.warnings                      NOTSET   ERROR    True 
-charset_normalizer                       NOTSET   ERROR    True 
-concurrent.futures                       NOTSET   ERROR    True 
-dask.dataframe.shuffle                   NOTSET   ERROR    True 
-dask.sizeof                              NOTSET   ERROR    True 
-distributed                              INFO     INFO     False
-distributed._signals                     NOTSET   INFO     True 
-distributed.active_memory_manager        NOTSET   INFO     True 
-distributed.active_memory_manager.tasks  NOTSET   INFO     True 
-distributed.batched                      NOTSET   INFO     True 
-distributed.client                       WARNING  WARNING  False
-distributed.comm                         NOTSET   INFO     True 
-distributed.comm.asyncio_tcp             NOTSET   INFO     True 
-distributed.comm.core                    NOTSET   INFO     True 
-distributed.comm.inproc                  NOTSET   INFO     True 
-distributed.comm.tcp                     NOTSET   INFO     True 
-distributed.comm.ucx                     NOTSET   INFO     True 
-distributed.comm.utils                   NOTSET   INFO     True 
-distributed.comm.ws                      NOTSET   INFO     True 
-distributed.config                       NOTSET   INFO     True 
-distributed.core                         NOTSET   INFO     True 
-distributed.dashboard.components.scheduler NOTSET   INFO     True 
-distributed.dashboard.components.worker  NOTSET   INFO     True 
-distributed.dask_ssh                     NOTSET   INFO     True 
-distributed.dask_worker                  NOTSET   INFO     True 
-distributed.deploy.adaptive              NOTSET   INFO     True 
-distributed.deploy.adaptive_core         NOTSET   INFO     True 
-distributed.deploy.cluster               NOTSET   INFO     True 
-distributed.deploy.local                 NOTSET   INFO     True 
-distributed.deploy.old_ssh               NOTSET   INFO     True 
-distributed.deploy.spec                  NOTSET   INFO     True 
-distributed.deploy.ssh                   NOTSET   INFO     True 
-distributed.diagnostics.eventstream      NOTSET   INFO     True 
-distributed.diagnostics.plugin           NOTSET   INFO     True 
-distributed.diagnostics.progress         NOTSET   INFO     True 
-distributed.diagnostics.progress_stream  NOTSET   INFO     True 
-distributed.diagnostics.progressbar      NOTSET   INFO     True 
-distributed.diagnostics.task_stream      NOTSET   INFO     True 
-distributed.diskutils                    NOTSET   INFO     True 
-distributed.event                        NOTSET   INFO     True 
-distributed.foo.bar                      NOTSET   INFO     True 
-distributed.http.proxy                   NOTSET   INFO     True 
-distributed.http.scheduler.info          NOTSET   INFO     True 
-distributed.lock                         NOTSET   INFO     True 
-distributed.multi_lock                   NOTSET   INFO     True 
-distributed.nanny                        NOTSET   INFO     True 
-distributed.preloading                   NOTSET   INFO     True 
-distributed.process                      NOTSET   INFO     True 
-distributed.protocol                     NOTSET   INFO     True 
-distributed.protocol.compression         NOTSET   INFO     True 
-distributed.protocol.core                NOTSET   INFO     True 
-distributed.protocol.pickle              NOTSET   INFO     True 
-distributed.pubsub                       NOTSET   INFO     True 
-distributed.queues                       NOTSET   INFO     True 
-distributed.recreate_tasks               NOTSET   INFO     True 
-distributed.scheduler                    NOTSET   INFO     True 
-distributed.semaphore                    NOTSET   INFO     True 
-distributed.shuffle.multi_comm           NOTSET   INFO     True 
-distributed.shuffle.multi_file           NOTSET   INFO     True 
-distributed.shuffle.shuffle_extension    NOTSET   INFO     True 
-distributed.sizeof                       NOTSET   INFO     True 
-distributed.spill                        NOTSET   INFO     True 
-distributed.stealing                     NOTSET   INFO     True 
-distributed.threadpoolexecutor           NOTSET   INFO     True 
-distributed.utils                        NOTSET   INFO     True 
-distributed.utils_comm                   NOTSET   INFO     True 
-distributed.utils_perf                   NOTSET   INFO     True 
-distributed.utils_test                   NOTSET   INFO     True 
-distributed.variable                     NOTSET   INFO     True 
-distributed.worker                       NOTSET   INFO     True 
-distributed.worker_memory                NOTSET   INFO     True 
-distributed.worker_state_machine         NOTSET   INFO     True 
-foo                                      NOTSET   ERROR    True 
-foo.bar                                  NOTSET   ERROR    False
-fsspec                                   NOTSET   ERROR    True 
-fsspec.local                             NOTSET   ERROR    True 
-h5py._conv                               NOTSET   ERROR    True 
-matplotlib                               NOTSET   ERROR    True 
-matplotlib.afm                           NOTSET   ERROR    True 
-matplotlib.artist                        NOTSET   ERROR    True 
-matplotlib.axes._axes                    NOTSET   ERROR    True 
-matplotlib.axes._base                    NOTSET   ERROR    True 
-matplotlib.axis                          NOTSET   ERROR    True 
-matplotlib.backend_bases                 NOTSET   ERROR    True 
-matplotlib.category                      NOTSET   ERROR    True 
-matplotlib.colorbar                      NOTSET   ERROR    True 
-matplotlib.dates                         NOTSET   ERROR    True 
-matplotlib.dviread                       NOTSET   ERROR    True 
-matplotlib.figure                        NOTSET   ERROR    True 
-matplotlib.font_manager                  NOTSET   ERROR    True 
-matplotlib.gridspec                      NOTSET   ERROR    True 
-matplotlib.image                         NOTSET   ERROR    True 
-matplotlib.legend                        NOTSET   ERROR    True 
-matplotlib.lines                         NOTSET   ERROR    True 
-matplotlib.mathtext                      NOTSET   ERROR    True 
-matplotlib.pyplot                        NOTSET   ERROR    True 
-matplotlib.style.core                    NOTSET   ERROR    True 
-matplotlib.text                          NOTSET   ERROR    True 
-matplotlib.textpath                      NOTSET   ERROR    True 
-matplotlib.ticker                        NOTSET   ERROR    True 
-numexpr.utils                            NOTSET   ERROR    True 
-parso.cache                              NOTSET   ERROR    True 
-parso.python.diff                        NOTSET   ERROR    True 
-pkg_resources.extern.packaging.tags      NOTSET   ERROR    True 
-prompt_toolkit.buffer                    NOTSET   ERROR    True 
-requests                                 NOTSET   ERROR    True 
-socks                                    NOTSET   ERROR    True 
-stack_data.serializing                   NOTSET   ERROR    True 
-tornado                                  CRITICAL CRITICAL False
-tornado.access                           NOTSET   CRITICAL True 
-tornado.application                      ERROR    ERROR    False
-tornado.general                          NOTSET   CRITICAL True 
-urllib3                                  NOTSET   ERROR    True 
-urllib3.connection                       NOTSET   ERROR    True 
-urllib3.connectionpool                   NOTSET   ERROR    True 
-urllib3.poolmanager                      NOTSET   ERROR    True 
-urllib3.response                         NOTSET   ERROR    True 
-urllib3.util.retry                       NOTSET   ERROR    True 
-
-PASSED
-distributed/tests/test_config.py::test_logging_empty_simple 
-== Loggers (name, level, effective level, propagate) ==
-<root>                                   ERROR    ERROR    True 
-PIL.Image                                NOTSET   ERROR    True 
-PIL.PngImagePlugin                       NOTSET   ERROR    True 
-aiohttp.access                           NOTSET   ERROR    True 
-aiohttp.client                           NOTSET   ERROR    True 
-aiohttp.internal                         NOTSET   ERROR    True 
-aiohttp.server                           NOTSET   ERROR    True 
-aiohttp.web                              NOTSET   ERROR    True 
-aiohttp.websocket                        NOTSET   ERROR    True 
-asyncio                                  NOTSET   ERROR    True 
-asyncio.events                           NOTSET   ERROR    True 
-bokeh                                    ERROR    ERROR    False
-bokeh.application                        NOTSET   ERROR    True 
-bokeh.application.application            NOTSET   ERROR    True 
-bokeh.application.handlers               NOTSET   ERROR    True 
-bokeh.application.handlers.code          NOTSET   ERROR    True 
-bokeh.application.handlers.code_runner   NOTSET   ERROR    True 
-bokeh.application.handlers.directory     NOTSET   ERROR    True 
-bokeh.application.handlers.document_lifecycle NOTSET   ERROR    True 
-bokeh.application.handlers.function      NOTSET   ERROR    True 
-bokeh.application.handlers.handler       NOTSET   ERROR    True 
-bokeh.application.handlers.lifecycle     NOTSET   ERROR    True 
-bokeh.application.handlers.notebook      NOTSET   ERROR    True 
-bokeh.application.handlers.request_handler NOTSET   ERROR    True 
-bokeh.application.handlers.script        NOTSET   ERROR    True 
-bokeh.application.handlers.server_lifecycle NOTSET   ERROR    True 
-bokeh.application.handlers.server_request_handler NOTSET   ERROR    True 
-bokeh.colors                             NOTSET   ERROR    True 
-bokeh.colors.color                       NOTSET   ERROR    True 
-bokeh.colors.groups                      NOTSET   ERROR    True 
-bokeh.colors.hsl                         NOTSET   ERROR    True 
-bokeh.colors.named                       NOTSET   ERROR    True 
-bokeh.colors.rgb                         NOTSET   ERROR    True 
-bokeh.colors.util                        NOTSET   ERROR    True 
-bokeh.core.enums                         NOTSET   ERROR    True 
-bokeh.core.has_props                     NOTSET   ERROR    True 
-bokeh.core.json_encoder                  NOTSET   ERROR    True 
-bokeh.core.properties                    NOTSET   ERROR    True 
-bokeh.core.property                      NOTSET   ERROR    True 
-bokeh.core.property._sphinx              NOTSET   ERROR    True 
-bokeh.core.property.alias                NOTSET   ERROR    True 
-bokeh.core.property.any                  NOTSET   ERROR    True 
-bokeh.core.property.auto                 NOTSET   ERROR    True 
-bokeh.core.property.bases                NOTSET   ERROR    True 
-bokeh.core.property.color                NOTSET   ERROR    True 
-bokeh.core.property.container            NOTSET   ERROR    True 
-bokeh.core.property.dataspec             NOTSET   ERROR    True 
-bokeh.core.property.datetime             NOTSET   ERROR    True 
-bokeh.core.property.descriptor_factory   NOTSET   ERROR    True 
-bokeh.core.property.descriptors          NOTSET   ERROR    True 
-bokeh.core.property.either               NOTSET   ERROR    True 
-bokeh.core.property.enum                 NOTSET   ERROR    True 
-bokeh.core.property.factors              NOTSET   ERROR    True 
-bokeh.core.property.include              NOTSET   ERROR    True 
-bokeh.core.property.instance             NOTSET   ERROR    True 
-bokeh.core.property.json                 NOTSET   ERROR    True 
-bokeh.core.property.nullable             NOTSET   ERROR    True 
-bokeh.core.property.numeric              NOTSET   ERROR    True 
-bokeh.core.property.override             NOTSET   ERROR    True 
-bokeh.core.property.pandas               NOTSET   ERROR    True 
-bokeh.core.property.primitive            NOTSET   ERROR    True 
-bokeh.core.property.readonly             NOTSET   ERROR    True 
-bokeh.core.property.singletons           NOTSET   ERROR    True 
-bokeh.core.property.string               NOTSET   ERROR    True 
-bokeh.core.property.struct               NOTSET   ERROR    True 
-bokeh.core.property.text_like            NOTSET   ERROR    True 
-bokeh.core.property.validation           NOTSET   ERROR    True 
-bokeh.core.property.visual               NOTSET   ERROR    True 
-bokeh.core.property.wrappers             NOTSET   ERROR    True 
-bokeh.core.property_mixins               NOTSET   ERROR    True 
-bokeh.core.query                         NOTSET   ERROR    True 
-bokeh.core.templates                     NOTSET   ERROR    True 
-bokeh.core.types                         NOTSET   ERROR    True 
-bokeh.core.validation                    NOTSET   ERROR    True 
-bokeh.core.validation.check              NOTSET   ERROR    True 
-bokeh.core.validation.decorators         NOTSET   ERROR    True 
-bokeh.core.validation.errors             NOTSET   ERROR    True 
-bokeh.core.validation.issue              NOTSET   ERROR    True 
-bokeh.core.validation.warnings           NOTSET   ERROR    True 
-bokeh.document                           NOTSET   ERROR    True 
-bokeh.document.callbacks                 NOTSET   ERROR    True 
-bokeh.document.document                  NOTSET   ERROR    True 
-bokeh.document.events                    NOTSET   ERROR    True 
-bokeh.document.json                      NOTSET   ERROR    True 
-bokeh.document.locking                   NOTSET   ERROR    True 
-bokeh.document.models                    NOTSET   ERROR    True 
-bokeh.document.modules                   NOTSET   ERROR    True 
-bokeh.document.util                      NOTSET   ERROR    True 
-bokeh.embed                              NOTSET   ERROR    True 
-bokeh.embed.bundle                       NOTSET   ERROR    True 
-bokeh.embed.elements                     NOTSET   ERROR    True 
-bokeh.embed.server                       NOTSET   ERROR    True 
-bokeh.embed.standalone                   NOTSET   ERROR    True 
-bokeh.embed.util                         NOTSET   ERROR    True 
-bokeh.embed.wrappers                     NOTSET   ERROR    True 
-bokeh.events                             NOTSET   ERROR    True 
-bokeh.io                                 NOTSET   ERROR    True 
-bokeh.io.doc                             NOTSET   ERROR    True 
-bokeh.io.export                          NOTSET   ERROR    True 
-bokeh.io.notebook                        NOTSET   ERROR    True 
-bokeh.io.output                          NOTSET   ERROR    True 
-bokeh.io.saving                          NOTSET   ERROR    True 
-bokeh.io.showing                         NOTSET   ERROR    True 
-bokeh.io.state                           NOTSET   ERROR    True 
-bokeh.io.util                            NOTSET   ERROR    True 
-bokeh.layouts                            NOTSET   ERROR    True 
-bokeh.model                              NOTSET   ERROR    True 
-bokeh.model.data_model                   NOTSET   ERROR    True 
-bokeh.model.docs                         NOTSET   ERROR    True 
-bokeh.model.model                        NOTSET   ERROR    True 
-bokeh.model.util                         NOTSET   ERROR    True 
-bokeh.models                             NOTSET   ERROR    True 
-bokeh.models.annotations                 NOTSET   ERROR    True 
-bokeh.models.arrow_heads                 NOTSET   ERROR    True 
-bokeh.models.axes                        NOTSET   ERROR    True 
-bokeh.models.callbacks                   NOTSET   ERROR    True 
-bokeh.models.canvas                      NOTSET   ERROR    True 
-bokeh.models.css                         NOTSET   ERROR    True 
-bokeh.models.dom                         NOTSET   ERROR    True 
-bokeh.models.expressions                 NOTSET   ERROR    True 
-bokeh.models.filters                     NOTSET   ERROR    True 
-bokeh.models.formatters                  NOTSET   ERROR    True 
-bokeh.models.glyph                       NOTSET   ERROR    True 
-bokeh.models.glyphs                      NOTSET   ERROR    True 
-bokeh.models.graphs                      NOTSET   ERROR    True 
-bokeh.models.grids                       NOTSET   ERROR    True 
-bokeh.models.labeling                    NOTSET   ERROR    True 
-bokeh.models.layouts                     NOTSET   ERROR    True 
-bokeh.models.map_plots                   NOTSET   ERROR    True 
-bokeh.models.mappers                     NOTSET   ERROR    True 
-bokeh.models.markers                     NOTSET   ERROR    True 
-bokeh.models.plots                       NOTSET   ERROR    True 
-bokeh.models.ranges                      NOTSET   ERROR    True 
-bokeh.models.renderers                   NOTSET   ERROR    True 
-bokeh.models.scales                      NOTSET   ERROR    True 
-bokeh.models.selections                  NOTSET   ERROR    True 
-bokeh.models.sources                     NOTSET   ERROR    True 
-bokeh.models.text                        NOTSET   ERROR    True 
-bokeh.models.textures                    NOTSET   ERROR    True 
-bokeh.models.tickers                     NOTSET   ERROR    True 
-bokeh.models.tiles                       NOTSET   ERROR    True 
-bokeh.models.tools                       NOTSET   ERROR    True 
-bokeh.models.transforms                  NOTSET   ERROR    True 
-bokeh.models.widgets                     NOTSET   ERROR    True 
-bokeh.models.widgets.buttons             NOTSET   ERROR    True 
-bokeh.models.widgets.groups              NOTSET   ERROR    True 
-bokeh.models.widgets.icons               NOTSET   ERROR    True 
-bokeh.models.widgets.inputs              NOTSET   ERROR    True 
-bokeh.models.widgets.markups             NOTSET   ERROR    True 
-bokeh.models.widgets.panels              NOTSET   ERROR    True 
-bokeh.models.widgets.sliders             NOTSET   ERROR    True 
-bokeh.models.widgets.tables              NOTSET   ERROR    True 
-bokeh.models.widgets.widget              NOTSET   ERROR    True 
-bokeh.palettes                           NOTSET   ERROR    True 
-bokeh.plotting                           NOTSET   ERROR    True 
-bokeh.plotting._decorators               NOTSET   ERROR    True 
-bokeh.plotting._docstring                NOTSET   ERROR    True 
-bokeh.plotting._graph                    NOTSET   ERROR    True 
-bokeh.plotting._legends                  NOTSET   ERROR    True 
-bokeh.plotting._plot                     NOTSET   ERROR    True 
-bokeh.plotting._renderer                 NOTSET   ERROR    True 
-bokeh.plotting._stack                    NOTSET   ERROR    True 
-bokeh.plotting._tools                    NOTSET   ERROR    True 
-bokeh.plotting.figure                    NOTSET   ERROR    True 
-bokeh.plotting.glyph_api                 NOTSET   ERROR    True 
-bokeh.plotting.gmap                      NOTSET   ERROR    True 
-bokeh.plotting.graph                     NOTSET   ERROR    True 
-bokeh.protocol                           NOTSET   ERROR    True 
-bokeh.protocol.exceptions                NOTSET   ERROR    True 
-bokeh.protocol.message                   NOTSET   ERROR    True 
-bokeh.protocol.messages                  NOTSET   ERROR    True 
-bokeh.protocol.messages.ack              NOTSET   ERROR    True 
-bokeh.protocol.messages.error            NOTSET   ERROR    True 
-bokeh.protocol.messages.ok               NOTSET   ERROR    True 
-bokeh.protocol.messages.patch_doc        NOTSET   ERROR    True 
-bokeh.protocol.messages.pull_doc_reply   NOTSET   ERROR    True 
-bokeh.protocol.messages.pull_doc_req     NOTSET   ERROR    True 
-bokeh.protocol.messages.push_doc         NOTSET   ERROR    True 
-bokeh.protocol.messages.server_info_reply NOTSET   ERROR    True 
-bokeh.protocol.messages.server_info_req  NOTSET   ERROR    True 
-bokeh.protocol.receiver                  NOTSET   ERROR    True 
-bokeh.resources                          NOTSET   ERROR    True 
-bokeh.sampledata                         NOTSET   ERROR    True 
-bokeh.server.auth_provider               NOTSET   ERROR    True 
-bokeh.server.callbacks                   NOTSET   ERROR    True 
-bokeh.server.connection                  NOTSET   ERROR    True 
-bokeh.server.contexts                    NOTSET   ERROR    True 
-bokeh.server.protocol_handler            NOTSET   ERROR    True 
-bokeh.server.server                      NOTSET   ERROR    True 
-bokeh.server.session                     NOTSET   ERROR    True 
-bokeh.server.tornado                     NOTSET   ERROR    True 
-bokeh.server.urls                        NOTSET   ERROR    True 
-bokeh.server.util                        NOTSET   ERROR    True 
-bokeh.server.views.auth_mixin            NOTSET   ERROR    True 
-bokeh.server.views.autoload_js_handler   NOTSET   ERROR    True 
-bokeh.server.views.doc_handler           NOTSET   ERROR    True 
-bokeh.server.views.metadata_handler      NOTSET   ERROR    True 
-bokeh.server.views.multi_root_static_handler NOTSET   ERROR    True 
-bokeh.server.views.root_handler          NOTSET   ERROR    True 
-bokeh.server.views.session_handler       NOTSET   ERROR    True 
-bokeh.server.views.static_handler        NOTSET   ERROR    True 
-bokeh.server.views.ws                    NOTSET   ERROR    True 
-bokeh.settings                           NOTSET   ERROR    True 
-bokeh.themes                             NOTSET   ERROR    True 
-bokeh.themes.theme                       NOTSET   ERROR    True 
-bokeh.transform                          NOTSET   ERROR    True 
-bokeh.util.browser                       NOTSET   ERROR    True 
-bokeh.util.callback_manager              NOTSET   ERROR    True 
-bokeh.util.compiler                      NOTSET   ERROR    True 
-bokeh.util.dataclasses                   NOTSET   ERROR    True 
-bokeh.util.datatypes                     NOTSET   ERROR    True 
-bokeh.util.dependencies                  NOTSET   ERROR    True 
-bokeh.util.deprecation                   NOTSET   ERROR    True 
-bokeh.util.functions                     NOTSET   ERROR    True 
-bokeh.util.logconfig                     NOTSET   ERROR    True 
-bokeh.util.options                       NOTSET   ERROR    True 
-bokeh.util.paths                         NOTSET   ERROR    True 
-bokeh.util.sampledata                    NOTSET   ERROR    True 
-bokeh.util.serialization                 NOTSET   ERROR    True 
-bokeh.util.string                        NOTSET   ERROR    True 
-bokeh.util.token                         NOTSET   ERROR    True 
-bokeh.util.tornado                       NOTSET   ERROR    True 
-bokeh.util.version                       NOTSET   ERROR    True 
-bokeh.util.warnings                      NOTSET   ERROR    True 
-charset_normalizer                       NOTSET   ERROR    True 
-concurrent.futures                       NOTSET   ERROR    True 
-dask.dataframe.shuffle                   NOTSET   ERROR    True 
-dask.sizeof                              NOTSET   ERROR    True 
-distributed                              INFO     INFO     False
-distributed._signals                     NOTSET   INFO     True 
-distributed.active_memory_manager        NOTSET   INFO     True 
-distributed.active_memory_manager.tasks  NOTSET   INFO     True 
-distributed.batched                      NOTSET   INFO     True 
-distributed.client                       WARNING  WARNING  False
-distributed.comm                         NOTSET   INFO     True 
-distributed.comm.asyncio_tcp             NOTSET   INFO     True 
-distributed.comm.core                    NOTSET   INFO     True 
-distributed.comm.inproc                  NOTSET   INFO     True 
-distributed.comm.tcp                     NOTSET   INFO     True 
-distributed.comm.ucx                     NOTSET   INFO     True 
-distributed.comm.utils                   NOTSET   INFO     True 
-distributed.comm.ws                      NOTSET   INFO     True 
-distributed.config                       NOTSET   INFO     True 
-distributed.core                         NOTSET   INFO     True 
-distributed.dashboard.components.scheduler NOTSET   INFO     True 
-distributed.dashboard.components.worker  NOTSET   INFO     True 
-distributed.dask_ssh                     NOTSET   INFO     True 
-distributed.dask_worker                  NOTSET   INFO     True 
-distributed.deploy.adaptive              NOTSET   INFO     True 
-distributed.deploy.adaptive_core         NOTSET   INFO     True 
-distributed.deploy.cluster               NOTSET   INFO     True 
-distributed.deploy.local                 NOTSET   INFO     True 
-distributed.deploy.old_ssh               NOTSET   INFO     True 
-distributed.deploy.spec                  NOTSET   INFO     True 
-distributed.deploy.ssh                   NOTSET   INFO     True 
-distributed.diagnostics.eventstream      NOTSET   INFO     True 
-distributed.diagnostics.plugin           NOTSET   INFO     True 
-distributed.diagnostics.progress         NOTSET   INFO     True 
-distributed.diagnostics.progress_stream  NOTSET   INFO     True 
-distributed.diagnostics.progressbar      NOTSET   INFO     True 
-distributed.diagnostics.task_stream      NOTSET   INFO     True 
-distributed.diskutils                    NOTSET   INFO     True 
-distributed.event                        NOTSET   INFO     True 
-distributed.foo.bar                      NOTSET   INFO     True 
-distributed.http.proxy                   NOTSET   INFO     True 
-distributed.http.scheduler.info          NOTSET   INFO     True 
-distributed.lock                         NOTSET   INFO     True 
-distributed.multi_lock                   NOTSET   INFO     True 
-distributed.nanny                        NOTSET   INFO     True 
-distributed.preloading                   NOTSET   INFO     True 
-distributed.process                      NOTSET   INFO     True 
-distributed.protocol                     NOTSET   INFO     True 
-distributed.protocol.compression         NOTSET   INFO     True 
-distributed.protocol.core                NOTSET   INFO     True 
-distributed.protocol.pickle              NOTSET   INFO     True 
-distributed.pubsub                       NOTSET   INFO     True 
-distributed.queues                       NOTSET   INFO     True 
-distributed.recreate_tasks               NOTSET   INFO     True 
-distributed.scheduler                    NOTSET   INFO     True 
-distributed.semaphore                    NOTSET   INFO     True 
-distributed.shuffle.multi_comm           NOTSET   INFO     True 
-distributed.shuffle.multi_file           NOTSET   INFO     True 
-distributed.shuffle.shuffle_extension    NOTSET   INFO     True 
-distributed.sizeof                       NOTSET   INFO     True 
-distributed.spill                        NOTSET   INFO     True 
-distributed.stealing                     NOTSET   INFO     True 
-distributed.threadpoolexecutor           NOTSET   INFO     True 
-distributed.utils                        NOTSET   INFO     True 
-distributed.utils_comm                   NOTSET   INFO     True 
-distributed.utils_perf                   NOTSET   INFO     True 
-distributed.utils_test                   NOTSET   INFO     True 
-distributed.variable                     NOTSET   INFO     True 
-distributed.worker                       NOTSET   INFO     True 
-distributed.worker_memory                NOTSET   INFO     True 
-distributed.worker_state_machine         NOTSET   INFO     True 
-foo                                      NOTSET   ERROR    True 
-foo.bar                                  NOTSET   ERROR    False
-fsspec                                   NOTSET   ERROR    True 
-fsspec.local                             NOTSET   ERROR    True 
-h5py._conv                               NOTSET   ERROR    True 
-matplotlib                               NOTSET   ERROR    True 
-matplotlib.afm                           NOTSET   ERROR    True 
-matplotlib.artist                        NOTSET   ERROR    True 
-matplotlib.axes._axes                    NOTSET   ERROR    True 
-matplotlib.axes._base                    NOTSET   ERROR    True 
-matplotlib.axis                          NOTSET   ERROR    True 
-matplotlib.backend_bases                 NOTSET   ERROR    True 
-matplotlib.category                      NOTSET   ERROR    True 
-matplotlib.colorbar                      NOTSET   ERROR    True 
-matplotlib.dates                         NOTSET   ERROR    True 
-matplotlib.dviread                       NOTSET   ERROR    True 
-matplotlib.figure                        NOTSET   ERROR    True 
-matplotlib.font_manager                  NOTSET   ERROR    True 
-matplotlib.gridspec                      NOTSET   ERROR    True 
-matplotlib.image                         NOTSET   ERROR    True 
-matplotlib.legend                        NOTSET   ERROR    True 
-matplotlib.lines                         NOTSET   ERROR    True 
-matplotlib.mathtext                      NOTSET   ERROR    True 
-matplotlib.pyplot                        NOTSET   ERROR    True 
-matplotlib.style.core                    NOTSET   ERROR    True 
-matplotlib.text                          NOTSET   ERROR    True 
-matplotlib.textpath                      NOTSET   ERROR    True 
-matplotlib.ticker                        NOTSET   ERROR    True 
-numexpr.utils                            NOTSET   ERROR    True 
-parso.cache                              NOTSET   ERROR    True 
-parso.python.diff                        NOTSET   ERROR    True 
-pkg_resources.extern.packaging.tags      NOTSET   ERROR    True 
-prompt_toolkit.buffer                    NOTSET   ERROR    True 
-requests                                 NOTSET   ERROR    True 
-socks                                    NOTSET   ERROR    True 
-stack_data.serializing                   NOTSET   ERROR    True 
-tornado                                  CRITICAL CRITICAL False
-tornado.access                           NOTSET   CRITICAL True 
-tornado.application                      ERROR    ERROR    False
-tornado.general                          NOTSET   CRITICAL True 
-urllib3                                  NOTSET   ERROR    True 
-urllib3.connection                       NOTSET   ERROR    True 
-urllib3.connectionpool                   NOTSET   ERROR    True 
-urllib3.poolmanager                      NOTSET   ERROR    True 
-urllib3.response                         NOTSET   ERROR    True 
-urllib3.util.retry                       NOTSET   ERROR    True 
-
-PASSED
-distributed/tests/test_config.py::test_logging_simple_under_distributed PASSED
-distributed/tests/test_config.py::test_logging_simple PASSED
-distributed/tests/test_config.py::test_logging_extended ['INFO: distributed.foo: 1: info', 'ERROR: distributed.foo.bar: 3: error', 'WARNING: distributed: 5: warning']
-PASSED
-distributed/tests/test_config.py::test_logging_mutual_exclusive PASSED
-distributed/tests/test_config.py::test_logging_file_config PASSED
-distributed/tests/test_config.py::test_schema PASSED
-distributed/tests/test_config.py::test_schema_is_complete PASSED
-distributed/tests/test_config.py::test_uvloop_event_loop SKIPPED (co...)
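[The logger dump above, emitted by the test_config logging tests, pairs each logger name with its directly configured level, the effective level it inherits, and a final flag that appears to be its propagate setting. The snippet below is only an illustration of the standard-library behaviour behind those columns; it reuses two names from the table for readability and is not part of the test output.]

    import logging

    # A logger left at NOTSET reports the effective level of its nearest
    # configured ancestor; a logger with propagate=False stops handing its
    # records up the hierarchy (compare the "distributed.foo.bar" and
    # "foo.bar" rows above).
    logging.getLogger("distributed").setLevel(logging.INFO)

    child = logging.getLogger("distributed.foo.bar")
    print(child.level)                # 0 (NOTSET): nothing configured directly
    print(child.getEffectiveLevel())  # 20 (INFO): inherited from "distributed"

    quiet = logging.getLogger("foo.bar")
    quiet.propagate = False           # records no longer bubble up to root handlers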
-distributed/tests/test_core.py::test_async_task_group_initialization PASSED
-distributed/tests/test_core.py::test_async_task_group_call_soon_executes_task_in_background PASSED
-distributed/tests/test_core.py::test_async_task_group_call_later_executes_delayed_task_in_background PASSED
-distributed/tests/test_core.py::test_async_task_group_close_closes PASSED
-distributed/tests/test_core.py::test_async_task_group_close_does_not_cancel_existing_tasks PASSED
-distributed/tests/test_core.py::test_async_task_group_close_prohibits_new_tasks PASSED
-distributed/tests/test_core.py::test_async_task_group_stop_disallows_shutdown PASSED
-distributed/tests/test_core.py::test_async_task_group_stop_cancels_long_running PASSED
-distributed/tests/test_core.py::test_server_status_is_always_enum PASSED
-distributed/tests/test_core.py::test_server_assign_assign_enum_is_quiet PASSED
-distributed/tests/test_core.py::test_server_status_compare_enum_is_quiet PASSED
-distributed/tests/test_core.py::test_server PASSED
-distributed/tests/test_core.py::test_server_raises_on_blocked_handlers 2022-08-26 14:05:44,416 - distributed.core - ERROR - Exception while handling op ping
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 768, in _handle_comm
-    result = handler(**msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 92, in _raise
-    raise exc
-ValueError: The 'ping' handler has been explicitly disallowed in Server, possibly due to security concerns.
-PASSED
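[The ValueError above is expected: the test deliberately blocks the 'ping' handler. Outside the test suite the same mechanism is normally reached through configuration; the sketch below is a rough, unverified illustration and assumes that the distributed.scheduler.blocked-handlers key and the scheduler's run_function handler behave as their names suggest.]

    import dask
    from dask.distributed import Client

    # Hypothetical illustration: list a handler name under blocked-handlers and
    # any call routed to it should be rejected with a ValueError much like the
    # "explicitly disallowed in Server" message logged above.
    with dask.config.set({"distributed.scheduler.blocked-handlers": ["run_function"]}):
        client = Client(processes=False, n_workers=1, threads_per_worker=1)
        # client.run_on_scheduler(lambda: None) would now fail on the scheduler side.
        client.close()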
-distributed/tests/test_core.py::test_server_listen SKIPPED (need --r...)
-distributed/tests/test_core.py::test_rpc_default PASSED
-distributed/tests/test_core.py::test_rpc_tcp PASSED
-distributed/tests/test_core.py::test_rpc_tls PASSED
-distributed/tests/test_core.py::test_rpc_inproc PASSED
-distributed/tests/test_core.py::test_rpc_inputs PASSED
-distributed/tests/test_core.py::test_rpc_message_lifetime_default PASSED
-distributed/tests/test_core.py::test_rpc_message_lifetime_tcp PASSED
-distributed/tests/test_core.py::test_rpc_message_lifetime_inproc PASSED
-distributed/tests/test_core.py::test_rpc_with_many_connections_tcp PASSED
-distributed/tests/test_core.py::test_rpc_with_many_connections_inproc PASSED
-distributed/tests/test_core.py::test_large_packets_tcp SKIPPED (need...)
-distributed/tests/test_core.py::test_large_packets_inproc PASSED
-distributed/tests/test_core.py::test_identity_tcp PASSED
-distributed/tests/test_core.py::test_identity_inproc PASSED
-distributed/tests/test_core.py::test_ports PASSED
-distributed/tests/test_core.py::test_errors 2022-08-26 14:05:44,740 - distributed.core - ERROR - Exception while handling op div
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 766, in _handle_comm
-    result = handler(comm, **msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_core.py", line 646, in stream_div
-    return x / y
-ZeroDivisionError: division by zero
-PASSED
-distributed/tests/test_core.py::test_connect_raises PASSED
-distributed/tests/test_core.py::test_send_recv_args PASSED
-distributed/tests/test_core.py::test_send_recv_cancelled PASSED
-distributed/tests/test_core.py::test_coerce_to_address PASSED
-distributed/tests/test_core.py::test_connection_pool 2022-08-26 14:05:45,015 - distributed.core - INFO - Collecting unused comms.  open: 5, active: 0, connecting: 0
-2022-08-26 14:05:45,121 - distributed.core - INFO - Collecting unused comms.  open: 5, active: 0, connecting: 0
-2022-08-26 14:05:45,331 - distributed.core - INFO - Collecting unused comms.  open: 5, active: 0, connecting: 0
-2022-08-26 14:05:45,438 - distributed.core - INFO - Collecting unused comms.  open: 5, active: 0, connecting: 0
-2022-08-26 14:05:45,543 - distributed.core - INFO - Collecting unused comms.  open: 3, active: 0, connecting: 0
-PASSED
-distributed/tests/test_core.py::test_connection_pool_close_while_connecting PASSED
-distributed/tests/test_core.py::test_connection_pool_outside_cancellation PASSED
-distributed/tests/test_core.py::test_connection_pool_respects_limit 2022-08-26 14:05:45,613 - distributed.core - INFO - Collecting unused comms.  open: 5, active: 4, connecting: 5
-2022-08-26 14:05:45,625 - distributed.core - INFO - Collecting unused comms.  open: 5, active: 0, connecting: 4
-PASSED
-distributed/tests/test_core.py::test_connection_pool_tls 2022-08-26 14:05:45,685 - distributed.core - INFO - Collecting unused comms.  open: 5, active: 3, connecting: 0
-2022-08-26 14:05:45,703 - distributed.core - INFO - Collecting unused comms.  open: 5, active: 1, connecting: 0
-2022-08-26 14:05:45,715 - distributed.core - INFO - Collecting unused comms.  open: 5, active: 4, connecting: 9
-2022-08-26 14:05:45,727 - distributed.core - INFO - Collecting unused comms.  open: 5, active: 4, connecting: 4
-2022-08-26 14:05:45,731 - distributed.core - INFO - Collecting unused comms.  open: 4, active: 0, connecting: 4
-PASSED
-distributed/tests/test_core.py::test_connection_pool_remove 2022-08-26 14:05:45,791 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:40613
-2022-08-26 14:05:45,791 - distributed.core - INFO - Collecting unused comms.  open: 4, active: 0, connecting: 0
-2022-08-26 14:05:45,793 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:40613
-PASSED
-distributed/tests/test_core.py::test_counters 2022-08-26 14:05:45,800 - distributed.core - ERROR - Exception while handling op div
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 766, in _handle_comm
-    result = handler(comm, **msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_core.py", line 646, in stream_div
-    return x / y
-ZeroDivisionError: division by zero
-PASSED
-distributed/tests/test_core.py::test_ticks 2022-08-26 14:05:45,806 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:45,808 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:45,808 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43421
-2022-08-26 14:05:45,808 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40719
-2022-08-26 14:05:45,813 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38555
-2022-08-26 14:05:45,813 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38555
-2022-08-26 14:05:45,813 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:05:45,813 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34597
-2022-08-26 14:05:45,813 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43421
-2022-08-26 14:05:45,813 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,813 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:45,813 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:45,813 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vq4pfdqn
-2022-08-26 14:05:45,813 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,814 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45973
-2022-08-26 14:05:45,814 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45973
-2022-08-26 14:05:45,814 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:05:45,814 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33039
-2022-08-26 14:05:45,814 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43421
-2022-08-26 14:05:45,814 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,814 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:05:45,814 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:45,814 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p5ksdcq1
-2022-08-26 14:05:45,814 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,817 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38555', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:45,817 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38555
-2022-08-26 14:05:45,817 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,818 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45973', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:45,818 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45973
-2022-08-26 14:05:45,818 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,820 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43421
-2022-08-26 14:05:45,820 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,820 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43421
-2022-08-26 14:05:45,820 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,820 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,820 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,833 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38555
-2022-08-26 14:05:45,833 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45973
-2022-08-26 14:05:45,834 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38555', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:45,834 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38555
-2022-08-26 14:05:45,834 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45973', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:45,834 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45973
-2022-08-26 14:05:45,834 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:05:45,834 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0d4f00fa-48d9-449e-9f03-22c455d4153f Address tcp://127.0.0.1:38555 Status: Status.closing
-2022-08-26 14:05:45,835 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6cf99f83-57f5-45eb-bdeb-480c07c2a9bb Address tcp://127.0.0.1:45973 Status: Status.closing
-2022-08-26 14:05:45,835 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:45,835 - distributed.scheduler - INFO - Scheduler closing all comms
-SKIPPED (could not import...)
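[The scheduler/worker banner repeated throughout this log is produced by the test harness starting an in-process scheduler and two workers, one single-threaded and one with two threads. Roughly the same setup can be stood up with the public API; the snippet below is only an approximation of what the fixture does, not the fixture itself.]

    from dask.distributed import Client, LocalCluster

    # Approximation of the fixture: an in-process scheduler plus two workers on
    # the loopback interface (the fixture mixes 1- and 2-thread workers).
    cluster = LocalCluster(n_workers=2, threads_per_worker=1, processes=False)
    client = Client(cluster)
    print(list(client.scheduler_info()["workers"]))  # two worker addresses
    client.close()
    cluster.close()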
-distributed/tests/test_core.py::test_tick_logging 2022-08-26 14:05:45,841 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:45,843 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:45,843 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35511
-2022-08-26 14:05:45,843 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42077
-2022-08-26 14:05:45,847 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46137
-2022-08-26 14:05:45,847 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46137
-2022-08-26 14:05:45,847 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:05:45,847 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34271
-2022-08-26 14:05:45,847 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35511
-2022-08-26 14:05:45,847 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,847 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:45,848 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:45,848 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kbggg9kv
-2022-08-26 14:05:45,848 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,848 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37485
-2022-08-26 14:05:45,848 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37485
-2022-08-26 14:05:45,848 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:05:45,848 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36199
-2022-08-26 14:05:45,848 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35511
-2022-08-26 14:05:45,848 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,848 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:05:45,848 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:45,849 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-81vbylzn
-2022-08-26 14:05:45,849 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,851 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46137', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:45,852 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46137
-2022-08-26 14:05:45,852 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,852 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37485', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:45,852 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37485
-2022-08-26 14:05:45,852 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,852 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35511
-2022-08-26 14:05:45,853 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,853 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35511
-2022-08-26 14:05:45,853 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,853 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,853 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,865 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46137
-2022-08-26 14:05:45,866 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37485
-2022-08-26 14:05:45,867 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46137', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:45,867 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46137
-2022-08-26 14:05:45,867 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37485', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:45,867 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37485
-2022-08-26 14:05:45,867 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:05:45,867 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cba423d5-2527-43c0-90fa-50cd40e412ea Address tcp://127.0.0.1:46137 Status: Status.closing
-2022-08-26 14:05:45,867 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f56f95f2-1e81-4766-8e77-32d6dc07a4bb Address tcp://127.0.0.1:37485 Status: Status.closing
-2022-08-26 14:05:45,868 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:45,868 - distributed.scheduler - INFO - Scheduler closing all comms
-SKIPPED (could not...)
-distributed/tests/test_core.py::test_compression[echo_serialize-None] PASSED
-distributed/tests/test_core.py::test_compression[echo_serialize-False] PASSED
-distributed/tests/test_core.py::test_compression[echo_serialize-zlib] PASSED
-distributed/tests/test_core.py::test_compression[echo_serialize-lz4] PASSED
-distributed/tests/test_core.py::test_compression[echo_serialize-zstd] PASSED
-distributed/tests/test_core.py::test_compression[echo_no_serialize-None] PASSED
-distributed/tests/test_core.py::test_compression[echo_no_serialize-False] PASSED
-distributed/tests/test_core.py::test_compression[echo_no_serialize-zlib] PASSED
-distributed/tests/test_core.py::test_compression[echo_no_serialize-lz4] PASSED
-distributed/tests/test_core.py::test_compression[echo_no_serialize-zstd] PASSED
-distributed/tests/test_core.py::test_rpc_serialization 2022-08-26 14:05:45,957 - distributed.protocol.core - CRITICAL - Failed to Serialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 109, in dumps
-    frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/msgpack/__init__.py", line 38, in packb
-    return Packer(**kwargs).pack(o)
-  File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 285, in msgpack._cmsgpack.Packer._pack
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 100, in _encode_default
-    frames.extend(create_serialized_sub_frames(obj))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 60, in create_serialized_sub_frames
-    sub_header, sub_frames = serialize_and_split(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 266, in serialize
-    return serialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type function', '<function inc at 0x5640384ccd40>')
-2022-08-26 14:05:45,957 - distributed.comm.utils - INFO - Unserializable Message: {'op': 'echo', 'x': <Serialize: <function inc at 0x5640384ccd40>>, 'reply': True, 'serializers': ['msgpack']}
-2022-08-26 14:05:45,957 - distributed.comm.utils - ERROR - ('Could not serialize object of type function', '<function inc at 0x5640384ccd40>')
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/utils.py", line 55, in _to_frames
-    return list(protocol.dumps(msg, **kwargs))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 109, in dumps
-    frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/msgpack/__init__.py", line 38, in packb
-    return Packer(**kwargs).pack(o)
-  File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 285, in msgpack._cmsgpack.Packer._pack
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 100, in _encode_default
-    frames.extend(create_serialized_sub_frames(obj))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 60, in create_serialized_sub_frames
-    sub_header, sub_frames = serialize_and_split(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 266, in serialize
-    return serialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type function', '<function inc at 0x5640384ccd40>')
-PASSED
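[The two tracebacks above are expected: the test sends a bare Python function while allowing only the msgpack serializer, which cannot represent function objects. A minimal sketch of the same failure, using the serialize() helper from distributed/protocol/serialize.py that appears in the traceback; the on_error="raise" argument is assumed from the signature shown there.]

    from distributed.protocol.serialize import serialize

    def inc(x):
        return x + 1

    # With the default serializer list a function is handled (via pickle/cloudpickle)...
    header, frames = serialize(inc)

    # ...but restricted to msgpack alone, as in the test above, serialization
    # fails with the same TypeError that the log records.
    try:
        serialize(inc, serializers=["msgpack"], on_error="raise")
    except TypeError as exc:
        print(exc)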
-distributed/tests/test_core.py::test_thread_id 2022-08-26 14:05:45,964 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:45,966 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:45,966 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36455
-2022-08-26 14:05:45,966 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40795
-2022-08-26 14:05:45,970 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46255
-2022-08-26 14:05:45,971 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46255
-2022-08-26 14:05:45,971 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:05:45,971 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39959
-2022-08-26 14:05:45,971 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36455
-2022-08-26 14:05:45,971 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,971 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:45,971 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:45,971 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p9f2teg_
-2022-08-26 14:05:45,971 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,971 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35545
-2022-08-26 14:05:45,972 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35545
-2022-08-26 14:05:45,972 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:05:45,972 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45135
-2022-08-26 14:05:45,972 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36455
-2022-08-26 14:05:45,972 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,972 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:05:45,972 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:45,972 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6q1x7etg
-2022-08-26 14:05:45,972 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,975 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46255', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:45,975 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46255
-2022-08-26 14:05:45,975 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,975 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35545', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:45,976 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35545
-2022-08-26 14:05:45,976 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,976 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36455
-2022-08-26 14:05:45,976 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,976 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36455
-2022-08-26 14:05:45,976 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:45,977 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,977 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:45,988 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46255
-2022-08-26 14:05:45,988 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35545
-2022-08-26 14:05:45,989 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46255', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:45,989 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46255
-2022-08-26 14:05:45,989 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35545', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:45,989 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35545
-2022-08-26 14:05:45,989 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:05:45,989 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1eec7750-8381-43d1-8905-ea1856143920 Address tcp://127.0.0.1:46255 Status: Status.closing
-2022-08-26 14:05:45,990 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fe828e5d-7fc6-462c-b5ba-c31389c655ef Address tcp://127.0.0.1:35545 Status: Status.closing
-2022-08-26 14:05:45,990 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:45,990 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:05:46,189 - distributed.utils_perf - WARNING - full garbage collections took 60% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_core.py::test_deserialize_error 2022-08-26 14:05:46,197 - distributed.core - ERROR - Exception while handling op throws
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 768, in _handle_comm
-    result = handler(**msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 238, in throws
-    raise RuntimeError("hello!")
-RuntimeError: hello!
-PASSED
-distributed/tests/test_core.py::test_connection_pool_detects_remote_close PASSED
-distributed/tests/test_core.py::test_close_properly PASSED
-distributed/tests/test_core.py::test_server_redundant_kwarg PASSED
-distributed/tests/test_core.py::test_server_comms_mark_active_handlers PASSED
-distributed/tests/test_core.py::test_close_fast_without_active_handlers[True] PASSED
-distributed/tests/test_core.py::test_close_fast_without_active_handlers[False] PASSED
-distributed/tests/test_core.py::test_close_grace_period_for_handlers PASSED
-distributed/tests/test_core.py::test_expects_comm PASSED
-distributed/tests/test_core.py::test_async_listener_stop PASSED
-distributed/tests/test_counter.py::test_digest[Counter-<lambda>] PASSED
-distributed/tests/test_counter.py::test_digest[None-<lambda>] SKIPPED
-distributed/tests/test_counter.py::test_counter PASSED
-distributed/tests/test_dask_collections.py::test_dataframes 2022-08-26 14:05:48,637 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:48,638 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:48,638 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36835
-2022-08-26 14:05:48,638 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35691
-2022-08-26 14:05:48,643 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37227
-2022-08-26 14:05:48,643 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37227
-2022-08-26 14:05:48,643 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:05:48,643 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42403
-2022-08-26 14:05:48,643 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36835
-2022-08-26 14:05:48,643 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:48,643 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:48,643 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:48,643 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nfuxca05
-2022-08-26 14:05:48,643 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:48,644 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40219
-2022-08-26 14:05:48,644 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40219
-2022-08-26 14:05:48,644 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:05:48,644 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41865
-2022-08-26 14:05:48,644 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36835
-2022-08-26 14:05:48,644 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:48,644 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:05:48,644 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:48,644 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x5xwdiqs
-2022-08-26 14:05:48,644 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:48,647 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37227', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:48,647 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37227
-2022-08-26 14:05:48,647 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:48,648 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40219', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:48,648 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40219
-2022-08-26 14:05:48,648 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:48,648 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36835
-2022-08-26 14:05:48,648 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:48,648 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36835
-2022-08-26 14:05:48,648 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:48,650 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:48,650 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:48,664 - distributed.scheduler - INFO - Receive client connection: Client-dc13cf66-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:48,664 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:49,205 - distributed.scheduler - INFO - Remove client Client-dc13cf66-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:49,206 - distributed.scheduler - INFO - Remove client Client-dc13cf66-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:49,206 - distributed.scheduler - INFO - Close client connection: Client-dc13cf66-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:49,206 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37227
-2022-08-26 14:05:49,207 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40219
-2022-08-26 14:05:49,208 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37227', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:49,208 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37227
-2022-08-26 14:05:49,208 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40219', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:49,208 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40219
-2022-08-26 14:05:49,208 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:05:49,208 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-51bf9440-ca32-4678-b893-e4e83a6cf5fc Address tcp://127.0.0.1:37227 Status: Status.closing
-2022-08-26 14:05:49,209 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5d8661d4-fe03-453d-bd59-c8fb8964347b Address tcp://127.0.0.1:40219 Status: Status.closing
-2022-08-26 14:05:49,210 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:49,210 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:05:49,412 - distributed.utils_perf - WARNING - full garbage collections took 57% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_dask_collections.py::test_dask_array_collections 2022-08-26 14:05:49,418 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:49,420 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:49,420 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43001
-2022-08-26 14:05:49,420 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42205
-2022-08-26 14:05:49,425 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42795
-2022-08-26 14:05:49,425 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42795
-2022-08-26 14:05:49,425 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:05:49,425 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37857
-2022-08-26 14:05:49,425 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43001
-2022-08-26 14:05:49,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,425 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:49,425 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:49,425 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xjjwxe2f
-2022-08-26 14:05:49,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,426 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39473
-2022-08-26 14:05:49,426 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39473
-2022-08-26 14:05:49,426 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:05:49,426 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34127
-2022-08-26 14:05:49,426 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43001
-2022-08-26 14:05:49,426 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,426 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:05:49,426 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:49,426 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x0ewuwya
-2022-08-26 14:05:49,426 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,429 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42795', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:49,429 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42795
-2022-08-26 14:05:49,429 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:49,430 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39473', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:49,430 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39473
-2022-08-26 14:05:49,430 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:49,430 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43001
-2022-08-26 14:05:49,430 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,430 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43001
-2022-08-26 14:05:49,430 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,431 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:49,431 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:49,444 - distributed.scheduler - INFO - Receive client connection: Client-dc8ae9da-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:49,445 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:49,699 - distributed.scheduler - INFO - Remove client Client-dc8ae9da-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:49,699 - distributed.scheduler - INFO - Remove client Client-dc8ae9da-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:49,699 - distributed.scheduler - INFO - Close client connection: Client-dc8ae9da-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:49,700 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42795
-2022-08-26 14:05:49,700 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39473
-2022-08-26 14:05:49,701 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42795', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:49,701 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42795
-2022-08-26 14:05:49,701 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39473', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:49,701 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39473
-2022-08-26 14:05:49,702 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:05:49,702 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f8a328a0-84e7-400a-8047-3d9e27a69df7 Address tcp://127.0.0.1:42795 Status: Status.closing
-2022-08-26 14:05:49,702 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ec182b56-fe4d-4365-b8a0-f31ae072012b Address tcp://127.0.0.1:39473 Status: Status.closing
-2022-08-26 14:05:49,703 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:49,703 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:05:49,907 - distributed.utils_perf - WARNING - full garbage collections took 56% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_dask_collections.py::test_bag_groupby_tasks_default 2022-08-26 14:05:49,913 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:49,915 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:49,915 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43069
-2022-08-26 14:05:49,915 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39693
-2022-08-26 14:05:49,919 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40205
-2022-08-26 14:05:49,920 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40205
-2022-08-26 14:05:49,920 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:05:49,920 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42991
-2022-08-26 14:05:49,920 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43069
-2022-08-26 14:05:49,920 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,920 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:49,920 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:49,920 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jouv2gb_
-2022-08-26 14:05:49,920 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,920 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41089
-2022-08-26 14:05:49,920 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41089
-2022-08-26 14:05:49,920 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:05:49,921 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39715
-2022-08-26 14:05:49,921 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43069
-2022-08-26 14:05:49,921 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,921 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:05:49,921 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:49,921 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-89pbet04
-2022-08-26 14:05:49,921 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,924 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40205', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:49,924 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40205
-2022-08-26 14:05:49,924 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:49,924 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41089', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:49,925 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41089
-2022-08-26 14:05:49,925 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:49,925 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43069
-2022-08-26 14:05:49,925 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,925 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43069
-2022-08-26 14:05:49,925 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:49,925 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:49,925 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:49,939 - distributed.scheduler - INFO - Receive client connection: Client-dcd665b2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:49,939 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:49,950 - distributed.scheduler - INFO - Remove client Client-dcd665b2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:49,950 - distributed.scheduler - INFO - Remove client Client-dcd665b2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:49,951 - distributed.scheduler - INFO - Close client connection: Client-dcd665b2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:49,951 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40205
-2022-08-26 14:05:49,951 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41089
-2022-08-26 14:05:49,952 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40205', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:49,952 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40205
-2022-08-26 14:05:49,952 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41089', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:49,952 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41089
-2022-08-26 14:05:49,952 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:05:49,953 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b787c65e-1a4f-4541-9c14-926ae83f33ed Address tcp://127.0.0.1:40205 Status: Status.closing
-2022-08-26 14:05:49,953 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a2c19a29-e384-43fe-abd5-df9f4e666373 Address tcp://127.0.0.1:41089 Status: Status.closing
-2022-08-26 14:05:49,954 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:49,954 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:05:50,154 - distributed.utils_perf - WARNING - full garbage collections took 57% CPU time recently (threshold: 10%)
-PASSED
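[test_bag_groupby_tasks_default asserts that dask.bag's groupby uses the task-based shuffle by default when running on the distributed scheduler. The operation it exercises looks roughly like the following; illustrative only, and the default-shuffle claim is the test's, not re-verified here.]

    import dask.bag as db
    from dask.distributed import Client

    client = Client(processes=False)              # distributed scheduler, in-process
    b = db.from_sequence(range(10), npartitions=2)
    grouped = b.groupby(lambda x: x % 2)          # shuffle method chosen by dask
    print(sorted(grouped.compute()))              # [(0, [0, 2, ...]), (1, [1, 3, ...])]
    client.close()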
-distributed/tests/test_dask_collections.py::test_dataframe_set_index_sync[wait] 2022-08-26 14:05:51,003 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:51,006 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:51,009 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:51,010 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:32905
-2022-08-26 14:05:51,010 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:51,018 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33593
-2022-08-26 14:05:51,018 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33593
-2022-08-26 14:05:51,018 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46391
-2022-08-26 14:05:51,018 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32905
-2022-08-26 14:05:51,018 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:51,018 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:51,018 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:51,018 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vq0uy8yg
-2022-08-26 14:05:51,019 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:51,059 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46363
-2022-08-26 14:05:51,059 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46363
-2022-08-26 14:05:51,059 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33381
-2022-08-26 14:05:51,059 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32905
-2022-08-26 14:05:51,059 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:51,059 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:51,059 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:51,059 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7cei2zu7
-2022-08-26 14:05:51,059 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:51,300 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33593', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:51,556 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33593
-2022-08-26 14:05:51,556 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:51,556 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32905
-2022-08-26 14:05:51,556 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:51,557 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46363', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:51,557 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46363
-2022-08-26 14:05:51,557 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:51,557 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:51,557 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32905
-2022-08-26 14:05:51,558 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:51,558 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:51,563 - distributed.scheduler - INFO - Receive client connection: Client-ddce2dbc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:51,564 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:52,015 - distributed.scheduler - INFO - Remove client Client-ddce2dbc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:52,018 - distributed.scheduler - INFO - Remove client Client-ddce2dbc-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_dask_collections.py::test_dataframe_set_index_sync[<lambda>] 2022-08-26 14:05:52,869 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:52,872 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:52,875 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:52,875 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40477
-2022-08-26 14:05:52,875 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:52,888 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-vq0uy8yg', purging
-2022-08-26 14:05:52,888 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-7cei2zu7', purging
-2022-08-26 14:05:52,894 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40355
-2022-08-26 14:05:52,895 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40355
-2022-08-26 14:05:52,895 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37165
-2022-08-26 14:05:52,895 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40477
-2022-08-26 14:05:52,895 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:52,895 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:52,895 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:52,895 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x0b35tst
-2022-08-26 14:05:52,895 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:52,940 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33479
-2022-08-26 14:05:52,940 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33479
-2022-08-26 14:05:52,940 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36093
-2022-08-26 14:05:52,940 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40477
-2022-08-26 14:05:52,940 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:52,940 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:52,940 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:52,940 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lc4dfhp0
-2022-08-26 14:05:52,940 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:53,174 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40355', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:53,427 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40355
-2022-08-26 14:05:53,427 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:53,427 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40477
-2022-08-26 14:05:53,428 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:53,428 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33479', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:53,428 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33479
-2022-08-26 14:05:53,429 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:53,429 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:53,429 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40477
-2022-08-26 14:05:53,429 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:53,429 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:53,435 - distributed.scheduler - INFO - Receive client connection: Client-deebb72e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:53,435 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:53,910 - distributed.scheduler - INFO - Remove client Client-deebb72e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:53,913 - distributed.scheduler - INFO - Remove client Client-deebb72e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:53,914 - distributed.scheduler - INFO - Close client connection: Client-deebb72e-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_dask_collections.py::test_loc_sync 2022-08-26 14:05:54,768 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:54,771 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:54,774 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:54,774 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44275
-2022-08-26 14:05:54,774 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:54,781 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-lc4dfhp0', purging
-2022-08-26 14:05:54,781 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-x0b35tst', purging
-2022-08-26 14:05:54,788 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43577
-2022-08-26 14:05:54,788 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43577
-2022-08-26 14:05:54,788 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37717
-2022-08-26 14:05:54,788 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44275
-2022-08-26 14:05:54,788 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:54,788 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:54,788 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:54,788 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mpcswd67
-2022-08-26 14:05:54,788 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:54,836 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34037
-2022-08-26 14:05:54,836 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34037
-2022-08-26 14:05:54,836 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42725
-2022-08-26 14:05:54,836 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44275
-2022-08-26 14:05:54,836 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:54,836 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:54,836 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:54,836 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kxwmbnbm
-2022-08-26 14:05:54,836 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:55,065 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43577', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:55,320 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43577
-2022-08-26 14:05:55,320 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:55,320 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44275
-2022-08-26 14:05:55,320 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:55,321 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34037', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:55,321 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34037
-2022-08-26 14:05:55,321 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:55,321 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:55,321 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44275
-2022-08-26 14:05:55,321 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:55,322 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:55,327 - distributed.scheduler - INFO - Receive client connection: Client-e00c826e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:55,328 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:55,467 - distributed.scheduler - INFO - Remove client Client-e00c826e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:55,467 - distributed.scheduler - INFO - Remove client Client-e00c826e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:55,467 - distributed.scheduler - INFO - Close client connection: Client-e00c826e-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_dask_collections.py::test_rolling_sync 2022-08-26 14:05:56,318 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:56,321 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:56,323 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:56,324 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35709
-2022-08-26 14:05:56,324 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:56,337 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-kxwmbnbm', purging
-2022-08-26 14:05:56,338 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-mpcswd67', purging
-2022-08-26 14:05:56,344 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43891
-2022-08-26 14:05:56,344 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43891
-2022-08-26 14:05:56,344 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38045
-2022-08-26 14:05:56,344 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35709
-2022-08-26 14:05:56,344 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:56,344 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:56,344 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:56,345 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rcnu9720
-2022-08-26 14:05:56,345 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:56,379 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37463
-2022-08-26 14:05:56,379 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37463
-2022-08-26 14:05:56,380 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43779
-2022-08-26 14:05:56,380 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35709
-2022-08-26 14:05:56,380 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:56,380 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:56,380 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:56,380 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8r_banjc
-2022-08-26 14:05:56,380 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:56,623 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43891', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:56,878 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43891
-2022-08-26 14:05:56,878 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:56,878 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35709
-2022-08-26 14:05:56,878 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:56,879 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37463', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:56,879 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37463
-2022-08-26 14:05:56,879 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:56,879 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:56,879 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35709
-2022-08-26 14:05:56,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:56,880 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:56,885 - distributed.scheduler - INFO - Receive client connection: Client-e0fa4a31-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:56,886 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:57,074 - distributed.scheduler - INFO - Remove client Client-e0fa4a31-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:57,074 - distributed.scheduler - INFO - Remove client Client-e0fa4a31-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:57,075 - distributed.scheduler - INFO - Close client connection: Client-e0fa4a31-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_dask_collections.py::test_loc 2022-08-26 14:05:57,087 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:57,089 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:57,089 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36267
-2022-08-26 14:05:57,089 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42075
-2022-08-26 14:05:57,090 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-8r_banjc', purging
-2022-08-26 14:05:57,090 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-rcnu9720', purging
-2022-08-26 14:05:57,094 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33969
-2022-08-26 14:05:57,094 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33969
-2022-08-26 14:05:57,094 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:05:57,094 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40875
-2022-08-26 14:05:57,094 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36267
-2022-08-26 14:05:57,095 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:57,095 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:57,095 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:57,095 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7ksghkwx
-2022-08-26 14:05:57,095 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:57,095 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35353
-2022-08-26 14:05:57,095 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35353
-2022-08-26 14:05:57,095 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:05:57,095 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43507
-2022-08-26 14:05:57,095 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36267
-2022-08-26 14:05:57,095 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:57,096 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:05:57,096 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:57,096 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kq5qzpxe
-2022-08-26 14:05:57,096 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:57,098 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33969', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:57,099 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33969
-2022-08-26 14:05:57,099 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:57,099 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35353', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:57,099 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35353
-2022-08-26 14:05:57,100 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:57,100 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36267
-2022-08-26 14:05:57,100 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:57,100 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36267
-2022-08-26 14:05:57,100 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:57,100 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:57,100 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:57,114 - distributed.scheduler - INFO - Receive client connection: Client-e11d33ed-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:57,114 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:57,161 - distributed.scheduler - INFO - Remove client Client-e11d33ed-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:57,161 - distributed.scheduler - INFO - Remove client Client-e11d33ed-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:57,161 - distributed.scheduler - INFO - Close client connection: Client-e11d33ed-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:57,161 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33969
-2022-08-26 14:05:57,162 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35353
-2022-08-26 14:05:57,163 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33969', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:57,163 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33969
-2022-08-26 14:05:57,163 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35353', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:57,163 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35353
-2022-08-26 14:05:57,163 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:05:57,163 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b7adc58d-81f6-4e1c-bdb0-aeff8184cf41 Address tcp://127.0.0.1:33969 Status: Status.closing
-2022-08-26 14:05:57,164 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b5ed97c7-46f8-4dfb-ab34-f7906f1a223b Address tcp://127.0.0.1:35353 Status: Status.closing
-2022-08-26 14:05:57,165 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:57,165 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:05:57,364 - distributed.utils_perf - WARNING - full garbage collections took 56% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_dask_collections.py::test_dataframe_groupby_tasks 2022-08-26 14:05:58,211 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:05:58,214 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:58,217 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:58,217 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37285
-2022-08-26 14:05:58,217 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:05:58,241 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46059
-2022-08-26 14:05:58,241 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46059
-2022-08-26 14:05:58,241 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42993
-2022-08-26 14:05:58,241 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37285
-2022-08-26 14:05:58,241 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:58,241 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:58,241 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:58,241 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-edwixjp5
-2022-08-26 14:05:58,241 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:58,271 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42829
-2022-08-26 14:05:58,271 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42829
-2022-08-26 14:05:58,271 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40249
-2022-08-26 14:05:58,271 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37285
-2022-08-26 14:05:58,271 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:58,271 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:58,271 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:58,271 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-95imvh1_
-2022-08-26 14:05:58,271 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:58,536 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46059', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:58,792 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46059
-2022-08-26 14:05:58,793 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:58,793 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37285
-2022-08-26 14:05:58,793 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:58,793 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42829', status: init, memory: 0, processing: 0>
-2022-08-26 14:05:58,794 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:58,794 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42829
-2022-08-26 14:05:58,794 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:58,794 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37285
-2022-08-26 14:05:58,794 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:58,795 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:58,800 - distributed.scheduler - INFO - Receive client connection: Client-e21e6428-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:58,800 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:05:59,086 - distributed.scheduler - INFO - Remove client Client-e21e6428-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:59,086 - distributed.scheduler - INFO - Remove client Client-e21e6428-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:59,087 - distributed.scheduler - INFO - Close client connection: Client-e21e6428-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_dask_collections.py::test_sparse_arrays 2022-08-26 14:05:59,100 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:59,102 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:59,102 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39399
-2022-08-26 14:05:59,102 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37089
-2022-08-26 14:05:59,102 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-95imvh1_', purging
-2022-08-26 14:05:59,103 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-edwixjp5', purging
-2022-08-26 14:05:59,107 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42225
-2022-08-26 14:05:59,107 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42225
-2022-08-26 14:05:59,107 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:05:59,107 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39345
-2022-08-26 14:05:59,107 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39399
-2022-08-26 14:05:59,107 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:59,107 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:59,107 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:59,107 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1ctd_yro
-2022-08-26 14:05:59,107 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:59,108 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38923
-2022-08-26 14:05:59,108 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38923
-2022-08-26 14:05:59,108 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:05:59,108 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45547
-2022-08-26 14:05:59,108 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39399
-2022-08-26 14:05:59,108 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:59,108 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:05:59,108 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:59,108 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pyb3z8fr
-2022-08-26 14:05:59,108 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:59,111 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42225', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:59,112 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42225
-2022-08-26 14:05:59,112 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:59,112 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38923', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:59,112 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38923
-2022-08-26 14:05:59,112 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:59,113 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39399
-2022-08-26 14:05:59,113 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:59,113 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39399
-2022-08-26 14:05:59,113 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:59,113 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:59,113 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:59,127 - distributed.scheduler - INFO - Receive client connection: Client-e2505d34-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:59,127 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:59,139 - distributed.scheduler - INFO - Remove client Client-e2505d34-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:59,139 - distributed.scheduler - INFO - Remove client Client-e2505d34-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:59,139 - distributed.scheduler - INFO - Close client connection: Client-e2505d34-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:59,140 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42225
-2022-08-26 14:05:59,140 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38923
-2022-08-26 14:05:59,141 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42225', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:59,141 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42225
-2022-08-26 14:05:59,141 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38923', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:59,141 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38923
-2022-08-26 14:05:59,141 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:05:59,141 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fc9aaf58-7503-436e-bc50-e0d8ceb252df Address tcp://127.0.0.1:42225 Status: Status.closing
-2022-08-26 14:05:59,142 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2fdb1688-52c8-4c11-82a1-c674162fad67 Address tcp://127.0.0.1:38923 Status: Status.closing
-2022-08-26 14:05:59,143 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:59,143 - distributed.scheduler - INFO - Scheduler closing all comms
-SKIPPED
-distributed/tests/test_dask_collections.py::test_delayed_none 2022-08-26 14:05:59,148 - distributed.scheduler - INFO - State start
-2022-08-26 14:05:59,150 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:05:59,150 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38173
-2022-08-26 14:05:59,150 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39855
-2022-08-26 14:05:59,153 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40227
-2022-08-26 14:05:59,153 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40227
-2022-08-26 14:05:59,153 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:05:59,153 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32981
-2022-08-26 14:05:59,153 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38173
-2022-08-26 14:05:59,153 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:59,153 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:05:59,153 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:05:59,153 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ux94bwo8
-2022-08-26 14:05:59,153 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:59,155 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40227', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:05:59,155 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40227
-2022-08-26 14:05:59,156 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:59,156 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38173
-2022-08-26 14:05:59,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:05:59,156 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:59,169 - distributed.scheduler - INFO - Receive client connection: Client-e256d34f-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:59,170 - distributed.core - INFO - Starting established connection
-2022-08-26 14:05:59,194 - distributed.scheduler - INFO - Remove client Client-e256d34f-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:59,195 - distributed.scheduler - INFO - Remove client Client-e256d34f-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:59,195 - distributed.scheduler - INFO - Close client connection: Client-e256d34f-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:05:59,196 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40227
-2022-08-26 14:05:59,197 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2d034fcc-764e-4385-a075-1052b5e002ab Address tcp://127.0.0.1:40227 Status: Status.closing
-2022-08-26 14:05:59,197 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40227', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:05:59,197 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40227
-2022-08-26 14:05:59,197 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:05:59,198 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:05:59,198 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:05:59,396 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_dask_collections.py::test_tuple_futures_arg[tuple] 2022-08-26 14:06:00,250 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:06:00,252 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:00,255 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:00,255 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38471
-2022-08-26 14:06:00,255 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:06:00,265 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44793
-2022-08-26 14:06:00,265 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44793
-2022-08-26 14:06:00,265 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45423
-2022-08-26 14:06:00,265 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38471
-2022-08-26 14:06:00,265 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:00,265 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:00,265 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:00,265 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ueuo8j64
-2022-08-26 14:06:00,265 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:00,302 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40981
-2022-08-26 14:06:00,302 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40981
-2022-08-26 14:06:00,302 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38635
-2022-08-26 14:06:00,302 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38471
-2022-08-26 14:06:00,302 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:00,302 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:00,302 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:00,302 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h4ieo8rt
-2022-08-26 14:06:00,302 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:00,541 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44793', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:00,795 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44793
-2022-08-26 14:06:00,795 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:00,795 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38471
-2022-08-26 14:06:00,796 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:00,796 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40981', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:00,796 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40981
-2022-08-26 14:06:00,796 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:00,796 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:00,796 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38471
-2022-08-26 14:06:00,797 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:00,797 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:00,802 - distributed.scheduler - INFO - Receive client connection: Client-e34ff6ff-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:00,803 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:06:00,944 - distributed.scheduler - INFO - Remove client Client-e34ff6ff-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:00,944 - distributed.scheduler - INFO - Remove client Client-e34ff6ff-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:00,944 - distributed.scheduler - INFO - Close client connection: Client-e34ff6ff-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_dask_collections.py::test_tuple_futures_arg[list] 2022-08-26 14:06:01,807 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:06:01,810 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:01,813 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:01,813 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34261
-2022-08-26 14:06:01,813 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:06:01,815 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ueuo8j64', purging
-2022-08-26 14:06:01,815 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-h4ieo8rt', purging
-2022-08-26 14:06:01,822 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39659
-2022-08-26 14:06:01,822 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39659
-2022-08-26 14:06:01,822 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45497
-2022-08-26 14:06:01,822 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34261
-2022-08-26 14:06:01,822 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:01,822 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:01,822 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:01,822 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1722auhp
-2022-08-26 14:06:01,822 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:01,860 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33853
-2022-08-26 14:06:01,860 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33853
-2022-08-26 14:06:01,860 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39147
-2022-08-26 14:06:01,860 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34261
-2022-08-26 14:06:01,860 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:01,860 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:01,860 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:01,860 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i2g_nv13
-2022-08-26 14:06:01,860 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:02,102 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39659', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:02,358 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39659
-2022-08-26 14:06:02,358 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:02,358 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34261
-2022-08-26 14:06:02,358 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:02,359 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33853', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:02,359 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33853
-2022-08-26 14:06:02,359 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:02,359 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:02,359 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34261
-2022-08-26 14:06:02,359 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:02,360 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:02,365 - distributed.scheduler - INFO - Receive client connection: Client-e43e714f-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:02,366 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:06:02,504 - distributed.scheduler - INFO - Remove client Client-e43e714f-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:02,504 - distributed.scheduler - INFO - Remove client Client-e43e714f-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:02,504 - distributed.scheduler - INFO - Close client connection: Client-e43e714f-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_diskutils.py::test_workdir_simple 2022-08-26 14:06:02,712 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_diskutils.py::test_two_workspaces_in_same_directory 2022-08-26 14:06:02,911 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-2022-08-26 14:06:03,107 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_diskutils.py::test_workspace_process_crash PASSED
-distributed/tests/test_diskutils.py::test_workspace_rmtree_failure PASSED
-distributed/tests/test_diskutils.py::test_locking_disabled 2022-08-26 14:06:03,673 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_diskutils.py::test_workspace_concurrency SKIPPED
-distributed/tests/test_events.py::test_event_on_workers 2022-08-26 14:06:03,679 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:03,681 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:03,681 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44167
-2022-08-26 14:06:03,681 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34183
-2022-08-26 14:06:03,682 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-i2g_nv13', purging
-2022-08-26 14:06:03,682 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-1722auhp', purging
-2022-08-26 14:06:03,686 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37235
-2022-08-26 14:06:03,686 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37235
-2022-08-26 14:06:03,686 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:03,686 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35905
-2022-08-26 14:06:03,686 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44167
-2022-08-26 14:06:03,686 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:03,686 - distributed.worker - INFO -               Threads:                          8
-2022-08-26 14:06:03,686 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:03,686 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ngbsnnzp
-2022-08-26 14:06:03,686 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:03,687 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43129
-2022-08-26 14:06:03,687 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43129
-2022-08-26 14:06:03,687 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:03,687 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41733
-2022-08-26 14:06:03,687 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44167
-2022-08-26 14:06:03,687 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:03,687 - distributed.worker - INFO -               Threads:                          8
-2022-08-26 14:06:03,687 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:03,687 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t2wswdqi
-2022-08-26 14:06:03,687 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:03,690 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37235', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:03,690 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37235
-2022-08-26 14:06:03,690 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:03,691 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43129', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:03,691 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43129
-2022-08-26 14:06:03,691 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:03,691 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44167
-2022-08-26 14:06:03,691 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:03,692 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44167
-2022-08-26 14:06:03,692 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:03,692 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:03,692 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:03,706 - distributed.scheduler - INFO - Receive client connection: Client-e50afe8c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:03,706 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:03,847 - distributed.scheduler - INFO - Remove client Client-e50afe8c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:03,848 - distributed.scheduler - INFO - Remove client Client-e50afe8c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:03,848 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:44167 remote=tcp://127.0.0.1:39954>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:06:03,849 - distributed.scheduler - INFO - Close client connection: Client-e50afe8c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:03,851 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37235
-2022-08-26 14:06:03,852 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43129
-2022-08-26 14:06:03,853 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37235', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:03,853 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37235
-2022-08-26 14:06:03,853 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43129', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:03,853 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43129
-2022-08-26 14:06:03,853 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:03,853 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-42d725dd-b105-4c56-8250-04a5d72fe998 Address tcp://127.0.0.1:37235 Status: Status.closing
-2022-08-26 14:06:03,853 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e823775b-56e4-465d-a0db-45df75cca41c Address tcp://127.0.0.1:43129 Status: Status.closing
-2022-08-26 14:06:03,855 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:03,855 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:04,054 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_events.py::test_default_event 2022-08-26 14:06:04,060 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:04,061 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:04,062 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42317
-2022-08-26 14:06:04,062 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45945
-2022-08-26 14:06:04,066 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44333
-2022-08-26 14:06:04,066 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44333
-2022-08-26 14:06:04,066 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:04,066 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42871
-2022-08-26 14:06:04,066 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42317
-2022-08-26 14:06:04,066 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,066 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:04,066 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:04,066 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-g5qgyr9_
-2022-08-26 14:06:04,067 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,067 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45355
-2022-08-26 14:06:04,067 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45355
-2022-08-26 14:06:04,067 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:04,067 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45303
-2022-08-26 14:06:04,067 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42317
-2022-08-26 14:06:04,067 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,067 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:04,067 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:04,067 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q6uy0pv7
-2022-08-26 14:06:04,067 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,070 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44333', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:04,071 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44333
-2022-08-26 14:06:04,071 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,071 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45355', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:04,071 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45355
-2022-08-26 14:06:04,071 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,072 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42317
-2022-08-26 14:06:04,072 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,072 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42317
-2022-08-26 14:06:04,072 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,072 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,072 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,086 - distributed.scheduler - INFO - Receive client connection: Client-e5450249-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,086 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,097 - distributed.scheduler - INFO - Remove client Client-e5450249-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,097 - distributed.scheduler - INFO - Remove client Client-e5450249-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,097 - distributed.scheduler - INFO - Close client connection: Client-e5450249-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,098 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44333
-2022-08-26 14:06:04,098 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45355
-2022-08-26 14:06:04,099 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44333', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:04,099 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44333
-2022-08-26 14:06:04,099 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45355', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:04,099 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45355
-2022-08-26 14:06:04,099 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:04,099 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ed172128-3651-421e-8d85-dfbafe95a46b Address tcp://127.0.0.1:44333 Status: Status.closing
-2022-08-26 14:06:04,100 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9f0dde8f-adbf-4703-becc-864a6d773af9 Address tcp://127.0.0.1:45355 Status: Status.closing
-2022-08-26 14:06:04,100 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:04,101 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:04,300 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
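
[The startup banners above (Start worker at, Registered to, Stopping worker, Scheduler closing) are what a small two-worker test cluster prints while being brought up and torn down around each case. A minimal sketch of creating such a cluster by hand, assuming only that dask.distributed is installed; the worker counts and thread settings below loosely mirror the banners and are not taken from the log:]

    from distributed import Client, LocalCluster

    # One scheduler plus two in-process workers, roughly what the fixture logs show.
    cluster = LocalCluster(n_workers=2, threads_per_worker=1, dashboard_address=None)
    client = Client(cluster)
    print(cluster.scheduler_address)   # e.g. tcp://127.0.0.1:<port>, as in the "Scheduler at:" lines
    client.close()
    cluster.close()
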
-distributed/tests/test_events.py::test_set_not_set 2022-08-26 14:06:04,306 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:04,307 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:04,308 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44521
-2022-08-26 14:06:04,308 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33369
-2022-08-26 14:06:04,312 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39983
-2022-08-26 14:06:04,312 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39983
-2022-08-26 14:06:04,312 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:04,312 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42993
-2022-08-26 14:06:04,312 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44521
-2022-08-26 14:06:04,312 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,312 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:04,312 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:04,312 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rajbcz2s
-2022-08-26 14:06:04,313 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,313 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43681
-2022-08-26 14:06:04,313 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43681
-2022-08-26 14:06:04,313 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:04,313 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46281
-2022-08-26 14:06:04,313 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44521
-2022-08-26 14:06:04,313 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,313 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:04,313 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:04,313 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-14befq7e
-2022-08-26 14:06:04,313 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,316 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39983', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:04,316 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39983
-2022-08-26 14:06:04,317 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,317 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43681', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:04,317 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43681
-2022-08-26 14:06:04,317 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,317 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44521
-2022-08-26 14:06:04,318 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,318 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44521
-2022-08-26 14:06:04,318 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,318 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,318 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,332 - distributed.scheduler - INFO - Receive client connection: Client-e56a88f3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,332 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,343 - distributed.scheduler - INFO - Remove client Client-e56a88f3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,343 - distributed.scheduler - INFO - Remove client Client-e56a88f3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,343 - distributed.scheduler - INFO - Close client connection: Client-e56a88f3-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,344 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39983
-2022-08-26 14:06:04,344 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43681
-2022-08-26 14:06:04,345 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39983', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:04,345 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39983
-2022-08-26 14:06:04,345 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43681', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:04,345 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43681
-2022-08-26 14:06:04,345 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:04,345 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ce2456c1-5a0c-4515-8cf8-0524060cc5bf Address tcp://127.0.0.1:39983 Status: Status.closing
-2022-08-26 14:06:04,346 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-50f565fe-f280-4cbf-ab03-83d2f4cce1d6 Address tcp://127.0.0.1:43681 Status: Status.closing
-2022-08-26 14:06:04,347 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:04,347 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:04,546 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_events.py::test_set_not_set_many_events 2022-08-26 14:06:04,552 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:04,553 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:04,553 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42685
-2022-08-26 14:06:04,553 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44515
-2022-08-26 14:06:04,558 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35261
-2022-08-26 14:06:04,558 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35261
-2022-08-26 14:06:04,558 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:04,558 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38499
-2022-08-26 14:06:04,558 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42685
-2022-08-26 14:06:04,558 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,558 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:04,558 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:04,558 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fmnn3d7f
-2022-08-26 14:06:04,558 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,559 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45407
-2022-08-26 14:06:04,559 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45407
-2022-08-26 14:06:04,559 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:04,559 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43711
-2022-08-26 14:06:04,559 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42685
-2022-08-26 14:06:04,559 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,559 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:04,559 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:04,559 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m8uozrzj
-2022-08-26 14:06:04,559 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,562 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35261', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:04,562 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35261
-2022-08-26 14:06:04,562 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,563 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45407', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:04,563 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45407
-2022-08-26 14:06:04,563 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,563 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42685
-2022-08-26 14:06:04,563 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,564 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42685
-2022-08-26 14:06:04,564 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,564 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,564 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,578 - distributed.scheduler - INFO - Receive client connection: Client-e5900cc7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,578 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,768 - distributed.scheduler - INFO - Remove client Client-e5900cc7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,768 - distributed.scheduler - INFO - Remove client Client-e5900cc7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,768 - distributed.scheduler - INFO - Close client connection: Client-e5900cc7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:04,769 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35261
-2022-08-26 14:06:04,769 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45407
-2022-08-26 14:06:04,770 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35261', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:04,770 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35261
-2022-08-26 14:06:04,770 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45407', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:04,770 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45407
-2022-08-26 14:06:04,770 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:04,771 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-69c22d9f-9cf7-45be-a938-a9ee93c74b8f Address tcp://127.0.0.1:35261 Status: Status.closing
-2022-08-26 14:06:04,771 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-56c71f0f-9465-46e4-8f87-4f92d315abf4 Address tcp://127.0.0.1:45407 Status: Status.closing
-2022-08-26 14:06:04,772 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:04,772 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:04,970 - distributed.utils_perf - WARNING - full garbage collections took 54% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_events.py::test_timeout 2022-08-26 14:06:04,976 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:04,977 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:04,978 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42511
-2022-08-26 14:06:04,978 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44387
-2022-08-26 14:06:04,982 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33677
-2022-08-26 14:06:04,982 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33677
-2022-08-26 14:06:04,982 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:04,982 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43297
-2022-08-26 14:06:04,982 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42511
-2022-08-26 14:06:04,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,982 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:04,982 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:04,983 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-l9lra82t
-2022-08-26 14:06:04,983 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,983 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36437
-2022-08-26 14:06:04,983 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36437
-2022-08-26 14:06:04,983 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:04,983 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33239
-2022-08-26 14:06:04,983 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42511
-2022-08-26 14:06:04,983 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,983 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:04,983 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:04,983 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ndfq30u7
-2022-08-26 14:06:04,984 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,986 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33677', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:04,986 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33677
-2022-08-26 14:06:04,987 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,987 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36437', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:04,987 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36437
-2022-08-26 14:06:04,987 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,987 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42511
-2022-08-26 14:06:04,988 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,988 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42511
-2022-08-26 14:06:04,988 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:04,988 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:04,988 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:05,002 - distributed.scheduler - INFO - Receive client connection: Client-e5d0c4b8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:05,002 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:05,206 - distributed.scheduler - INFO - Remove client Client-e5d0c4b8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:05,207 - distributed.scheduler - INFO - Remove client Client-e5d0c4b8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:05,207 - distributed.scheduler - INFO - Close client connection: Client-e5d0c4b8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:05,207 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33677
-2022-08-26 14:06:05,207 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36437
-2022-08-26 14:06:05,208 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33677', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:05,208 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33677
-2022-08-26 14:06:05,208 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36437', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:05,209 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36437
-2022-08-26 14:06:05,209 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:05,209 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7d38480f-1c75-4e6b-b938-108d271a1d22 Address tcp://127.0.0.1:33677 Status: Status.closing
-2022-08-26 14:06:05,209 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-383e1638-c351-4d0c-a4b5-b1e68c144bda Address tcp://127.0.0.1:36437 Status: Status.closing
-2022-08-26 14:06:05,210 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:05,210 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:05,408 - distributed.utils_perf - WARNING - full garbage collections took 54% CPU time recently (threshold: 10%)
-PASSED
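
[The test_events.py cases above (test_default_event, test_set_not_set, test_timeout) exercise distributed's named Event primitive. A rough sketch of that API against a throwaway local cluster; the event name and timeout values here are illustrative, not taken from the log:]

    from distributed import Client, Event

    client = Client(processes=False)   # in-process cluster, just for illustration

    ev = Event("example-event")        # events are identified by name on the scheduler
    print(ev.is_set())                 # False: a fresh event starts unset
    ev.set()                           # flip it from any client or worker
    ev.wait(timeout=1)                 # returns promptly once the event is set
    ev.clear()                         # back to unset
    ev.wait(timeout=0.5)               # on an unset event, wait() stops blocking when the timeout expires

    client.close()
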
-distributed/tests/test_events.py::test_event_sync 2022-08-26 14:06:06,247 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:06:06,249 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:06,252 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:06,252 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38543
-2022-08-26 14:06:06,252 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:06:06,272 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45881
-2022-08-26 14:06:06,272 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45881
-2022-08-26 14:06:06,272 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45519
-2022-08-26 14:06:06,272 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38543
-2022-08-26 14:06:06,272 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:06,272 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:06,272 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:06,272 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ndt6y57z
-2022-08-26 14:06:06,272 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:06,309 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33099
-2022-08-26 14:06:06,309 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33099
-2022-08-26 14:06:06,309 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41237
-2022-08-26 14:06:06,309 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38543
-2022-08-26 14:06:06,309 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:06,309 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:06,309 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:06,309 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ucp5bmvj
-2022-08-26 14:06:06,309 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:06,549 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45881', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:06,804 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45881
-2022-08-26 14:06:06,804 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:06,804 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38543
-2022-08-26 14:06:06,804 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:06,804 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33099', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:06,805 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:06,805 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33099
-2022-08-26 14:06:06,805 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:06,805 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38543
-2022-08-26 14:06:06,805 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:06,806 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:06,812 - distributed.scheduler - INFO - Receive client connection: Client-e6e4daa2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:06,812 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:06,830 - distributed.scheduler - INFO - Receive client connection: Client-worker-e6e79f6e-2582-11ed-a359-00d861bc4509
-2022-08-26 14:06:06,830 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:06,832 - distributed.scheduler - INFO - Receive client connection: Client-worker-e6e79cdf-2582-11ed-a35a-00d861bc4509
-2022-08-26 14:06:06,832 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:06:07,146 - distributed.scheduler - INFO - Remove client Client-e6e4daa2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:07,146 - distributed.scheduler - INFO - Remove client Client-e6e4daa2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:07,146 - distributed.scheduler - INFO - Close client connection: Client-e6e4daa2-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_events.py::test_event_types 2022-08-26 14:06:07,159 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:07,161 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:07,161 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42961
-2022-08-26 14:06:07,161 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40803
-2022-08-26 14:06:07,161 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ucp5bmvj', purging
-2022-08-26 14:06:07,162 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ndt6y57z', purging
-2022-08-26 14:06:07,166 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44497
-2022-08-26 14:06:07,166 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44497
-2022-08-26 14:06:07,166 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:07,166 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36239
-2022-08-26 14:06:07,166 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42961
-2022-08-26 14:06:07,166 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,166 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:07,166 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:07,166 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jwtizlz3
-2022-08-26 14:06:07,166 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,167 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41777
-2022-08-26 14:06:07,167 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41777
-2022-08-26 14:06:07,167 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:07,167 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43103
-2022-08-26 14:06:07,167 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42961
-2022-08-26 14:06:07,167 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,167 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:07,167 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:07,167 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qpij9tq5
-2022-08-26 14:06:07,167 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,170 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44497', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:07,170 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44497
-2022-08-26 14:06:07,170 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,171 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41777', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:07,171 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41777
-2022-08-26 14:06:07,171 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,171 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42961
-2022-08-26 14:06:07,171 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,171 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42961
-2022-08-26 14:06:07,171 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,172 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,172 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,185 - distributed.scheduler - INFO - Receive client connection: Client-e71df8eb-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:07,186 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,197 - distributed.scheduler - INFO - Remove client Client-e71df8eb-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:07,197 - distributed.scheduler - INFO - Remove client Client-e71df8eb-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:07,198 - distributed.scheduler - INFO - Close client connection: Client-e71df8eb-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:07,198 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44497
-2022-08-26 14:06:07,198 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41777
-2022-08-26 14:06:07,199 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44497', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:07,199 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44497
-2022-08-26 14:06:07,199 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41777', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:07,199 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41777
-2022-08-26 14:06:07,199 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:07,200 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0fa803ef-14a3-4934-816e-145fb1aaeae7 Address tcp://127.0.0.1:44497 Status: Status.closing
-2022-08-26 14:06:07,200 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2e251af8-52ab-49c3-a3dd-c67307386f23 Address tcp://127.0.0.1:41777 Status: Status.closing
-2022-08-26 14:06:07,201 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:07,201 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:07,399 - distributed.utils_perf - WARNING - full garbage collections took 54% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_events.py::test_serializable 2022-08-26 14:06:07,405 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:07,406 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:07,406 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:32935
-2022-08-26 14:06:07,406 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33439
-2022-08-26 14:06:07,411 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40535
-2022-08-26 14:06:07,411 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40535
-2022-08-26 14:06:07,411 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:07,411 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45771
-2022-08-26 14:06:07,411 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32935
-2022-08-26 14:06:07,411 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,411 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:07,411 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:07,411 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f4x6bq5j
-2022-08-26 14:06:07,411 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,412 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45447
-2022-08-26 14:06:07,412 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45447
-2022-08-26 14:06:07,412 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:07,412 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33473
-2022-08-26 14:06:07,412 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32935
-2022-08-26 14:06:07,412 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,412 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:07,412 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:07,412 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ihxo9roi
-2022-08-26 14:06:07,412 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,415 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40535', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:07,415 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40535
-2022-08-26 14:06:07,415 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,416 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45447', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:07,416 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45447
-2022-08-26 14:06:07,416 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,416 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32935
-2022-08-26 14:06:07,416 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,417 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32935
-2022-08-26 14:06:07,417 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,417 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,417 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,431 - distributed.scheduler - INFO - Receive client connection: Client-e74366fe-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:07,431 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,473 - distributed.scheduler - INFO - Remove client Client-e74366fe-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:07,474 - distributed.scheduler - INFO - Remove client Client-e74366fe-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:07,474 - distributed.scheduler - INFO - Close client connection: Client-e74366fe-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:07,474 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40535
-2022-08-26 14:06:07,475 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45447
-2022-08-26 14:06:07,476 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40535', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:07,476 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40535
-2022-08-26 14:06:07,476 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45447', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:07,476 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45447
-2022-08-26 14:06:07,476 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:07,476 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1d3d180c-52f1-4437-9d8e-4e8a2d4c873e Address tcp://127.0.0.1:40535 Status: Status.closing
-2022-08-26 14:06:07,476 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c14d35cc-a32d-44b1-af41-08c001f99e59 Address tcp://127.0.0.1:45447 Status: Status.closing
-2022-08-26 14:06:07,478 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:07,478 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:07,676 - distributed.utils_perf - WARNING - full garbage collections took 54% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_events.py::test_two_events_on_workers 2022-08-26 14:06:07,682 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:07,684 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:07,684 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38305
-2022-08-26 14:06:07,684 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42551
-2022-08-26 14:06:07,688 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43013
-2022-08-26 14:06:07,689 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43013
-2022-08-26 14:06:07,689 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:07,689 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44723
-2022-08-26 14:06:07,689 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38305
-2022-08-26 14:06:07,689 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,689 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:07,689 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:07,689 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-eb2kmkv9
-2022-08-26 14:06:07,689 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,689 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34657
-2022-08-26 14:06:07,689 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34657
-2022-08-26 14:06:07,690 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:07,690 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33677
-2022-08-26 14:06:07,690 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38305
-2022-08-26 14:06:07,690 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,690 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:07,690 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:07,690 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q44bxwpz
-2022-08-26 14:06:07,690 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,693 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43013', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:07,693 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43013
-2022-08-26 14:06:07,693 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,693 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34657', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:07,694 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34657
-2022-08-26 14:06:07,694 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,694 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38305
-2022-08-26 14:06:07,694 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,694 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38305
-2022-08-26 14:06:07,694 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:07,695 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,695 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:07,708 - distributed.scheduler - INFO - Receive client connection: Client-e76dc56c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:07,709 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:08,109 - distributed.scheduler - INFO - Remove client Client-e76dc56c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:08,109 - distributed.scheduler - INFO - Remove client Client-e76dc56c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:08,109 - distributed.scheduler - INFO - Close client connection: Client-e76dc56c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:08,110 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43013
-2022-08-26 14:06:08,110 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34657
-2022-08-26 14:06:08,111 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43013', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:08,111 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43013
-2022-08-26 14:06:08,111 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34657', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:08,111 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34657
-2022-08-26 14:06:08,112 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:08,112 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-654a5666-737c-45a3-b891-5e0ff0c81a46 Address tcp://127.0.0.1:43013 Status: Status.closing
-2022-08-26 14:06:08,112 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d930b36a-e5a7-4d09-b7fb-75ea7b6762ec Address tcp://127.0.0.1:34657 Status: Status.closing
-2022-08-26 14:06:08,113 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:08,114 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:08,313 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
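
[test_serializable and test_two_events_on_workers above suggest that an Event can be referenced by name from inside submitted tasks and coordinated across workers. A hedged sketch of that pattern; the function and event name are made up for illustration:]

    from distributed import Client, Event

    client = Client(processes=False)

    def producer(name):
        # Runs on a worker: look the event up by name and set it.
        Event(name).set()
        return "done"

    ev = Event("handshake")
    fut = client.submit(producer, "handshake")
    ev.wait(timeout=5)                 # unblocks once the submitted task calls set()
    print(fut.result(), ev.is_set())

    client.close()
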
-distributed/tests/test_failed_workers.py::test_submit_after_failed_worker_sync SKIPPED
-distributed/tests/test_failed_workers.py::test_submit_after_failed_worker_async[False] SKIPPED
-distributed/tests/test_failed_workers.py::test_submit_after_failed_worker_async[True] SKIPPED
-distributed/tests/test_failed_workers.py::test_submit_after_failed_worker 2022-08-26 14:06:08,322 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:08,324 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:08,324 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35791
-2022-08-26 14:06:08,324 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41677
-2022-08-26 14:06:08,328 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39357
-2022-08-26 14:06:08,329 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39357
-2022-08-26 14:06:08,329 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:08,329 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38767
-2022-08-26 14:06:08,329 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35791
-2022-08-26 14:06:08,329 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:08,329 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:08,329 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:08,329 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-03t_8lvu
-2022-08-26 14:06:08,329 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:08,330 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35517
-2022-08-26 14:06:08,330 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35517
-2022-08-26 14:06:08,330 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:08,330 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33035
-2022-08-26 14:06:08,330 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35791
-2022-08-26 14:06:08,330 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:08,330 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:08,330 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:08,330 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7byqgxak
-2022-08-26 14:06:08,330 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:08,333 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39357', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:08,333 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39357
-2022-08-26 14:06:08,333 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:08,334 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35517', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:08,334 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35517
-2022-08-26 14:06:08,334 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:08,334 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35791
-2022-08-26 14:06:08,334 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:08,335 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35791
-2022-08-26 14:06:08,335 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:08,335 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:08,335 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:08,349 - distributed.scheduler - INFO - Receive client connection: Client-e7cf78be-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:08,349 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:08,369 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39357
-2022-08-26 14:06:08,370 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-21d4ab48-43ad-447a-9520-a590a31ec4ea Address tcp://127.0.0.1:39357 Status: Status.closing
-2022-08-26 14:06:08,370 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39357', name: 0, status: closing, memory: 3, processing: 0>
-2022-08-26 14:06:08,370 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39357
-2022-08-26 14:06:08,385 - distributed.scheduler - INFO - Remove client Client-e7cf78be-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:08,385 - distributed.scheduler - INFO - Remove client Client-e7cf78be-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:08,386 - distributed.scheduler - INFO - Close client connection: Client-e7cf78be-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:08,387 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35517
-2022-08-26 14:06:08,388 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-35b61c7a-ab7e-4f40-ab76-f40f22e05de6 Address tcp://127.0.0.1:35517 Status: Status.closing
-2022-08-26 14:06:08,388 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35517', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:08,388 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35517
-2022-08-26 14:06:08,388 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:08,389 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:08,389 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:08,588 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
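
[test_submit_after_failed_worker above, and test_restart below, cover recovery paths: work can be resubmitted after a worker disappears, and Client.restart() has the nannies respawn the workers while the scheduler clears its task state (the "Releasing all requested keys" / "Clear task state" lines further down). A rough sketch, with an assumed scheduler address:]

    from distributed import Client

    client = Client("tcp://127.0.0.1:8786")   # assumed address of a nanny-backed cluster

    futures = client.map(lambda x: x + 1, range(10))
    print(client.gather(futures))

    client.restart()                   # workers are killed and respawned; previous results are dropped
    futures = client.map(lambda x: x * 2, range(10))   # resubmit after the restart
    print(client.gather(futures))

    client.close()
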
-distributed/tests/test_failed_workers.py::test_gather_after_failed_worker SKIPPED
-distributed/tests/test_failed_workers.py::test_gather_then_submit_after_failed_workers SKIPPED
-distributed/tests/test_failed_workers.py::test_restart 2022-08-26 14:06:08,596 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:08,598 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:08,598 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33233
-2022-08-26 14:06:08,598 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34205
-2022-08-26 14:06:08,603 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:45009'
-2022-08-26 14:06:08,603 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36761'
-2022-08-26 14:06:09,215 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42225
-2022-08-26 14:06:09,215 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42225
-2022-08-26 14:06:09,215 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:09,215 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40103
-2022-08-26 14:06:09,215 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33233
-2022-08-26 14:06:09,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:09,215 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:09,215 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:09,215 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-u6zcn7cm
-2022-08-26 14:06:09,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:09,224 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44279
-2022-08-26 14:06:09,224 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44279
-2022-08-26 14:06:09,224 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:09,224 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38115
-2022-08-26 14:06:09,224 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33233
-2022-08-26 14:06:09,224 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:09,224 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:09,224 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:09,225 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1ho962ws
-2022-08-26 14:06:09,225 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:09,472 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44279', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:09,472 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44279
-2022-08-26 14:06:09,473 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:09,473 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33233
-2022-08-26 14:06:09,473 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:09,473 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:09,480 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42225', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:09,481 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42225
-2022-08-26 14:06:09,481 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:09,481 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33233
-2022-08-26 14:06:09,481 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:09,481 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:09,494 - distributed.scheduler - INFO - Receive client connection: Client-e87e27f0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:09,494 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:09,739 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:06:09,739 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:09,742 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:09,742 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:09,742 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42225
-2022-08-26 14:06:09,743 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44279
-2022-08-26 14:06:09,743 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0bd57337-120a-45c8-b6ba-00dd14b2ef20 Address tcp://127.0.0.1:42225 Status: Status.closing
-2022-08-26 14:06:09,743 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42225', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:09,743 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42225
-2022-08-26 14:06:09,816 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-97ec81f0-3476-43ae-aecf-5037d646e2cf Address tcp://127.0.0.1:44279 Status: Status.closing
-2022-08-26 14:06:09,816 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44279', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:09,816 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44279
-2022-08-26 14:06:09,816 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:09,938 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:10,005 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:10,553 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45701
-2022-08-26 14:06:10,553 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45701
-2022-08-26 14:06:10,554 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:10,554 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34443
-2022-08-26 14:06:10,554 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33233
-2022-08-26 14:06:10,554 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:10,554 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:10,554 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:10,554 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zl8qotbi
-2022-08-26 14:06:10,554 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:10,626 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35617
-2022-08-26 14:06:10,626 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35617
-2022-08-26 14:06:10,626 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:10,626 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33491
-2022-08-26 14:06:10,626 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33233
-2022-08-26 14:06:10,626 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:10,626 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:10,626 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:10,626 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p3zjllnk
-2022-08-26 14:06:10,626 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:10,818 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45701', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:10,818 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45701
-2022-08-26 14:06:10,818 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:10,818 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33233
-2022-08-26 14:06:10,818 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:10,819 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:10,875 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35617', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:10,875 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35617
-2022-08-26 14:06:10,876 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:10,876 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33233
-2022-08-26 14:06:10,876 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:10,876 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:11,012 - distributed.scheduler - INFO - Remove client Client-e87e27f0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:11,012 - distributed.scheduler - INFO - Remove client Client-e87e27f0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:11,012 - distributed.scheduler - INFO - Close client connection: Client-e87e27f0-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:11,013 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:45009'.
-2022-08-26 14:06:11,013 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:11,013 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36761'.
-2022-08-26 14:06:11,013 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:11,013 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45701
-2022-08-26 14:06:11,013 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35617
-2022-08-26 14:06:11,014 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0b3f8f68-6169-4f92-a127-9476246bac3e Address tcp://127.0.0.1:45701 Status: Status.closing
-2022-08-26 14:06:11,014 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45701', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:11,014 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45701
-2022-08-26 14:06:11,014 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-537ef7bb-c80a-4d96-899d-4753e2c05dcb Address tcp://127.0.0.1:35617 Status: Status.closing
-2022-08-26 14:06:11,014 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35617', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:11,014 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35617
-2022-08-26 14:06:11,014 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:11,145 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:11,146 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:11,345 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_failed_workers.py::test_restart_cleared 2022-08-26 14:06:11,351 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:11,352 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:11,353 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37843
-2022-08-26 14:06:11,353 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37165
-2022-08-26 14:06:11,358 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:46187'
-2022-08-26 14:06:11,358 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:35235'
-2022-08-26 14:06:11,973 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36149
-2022-08-26 14:06:11,973 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36149
-2022-08-26 14:06:11,973 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36075
-2022-08-26 14:06:11,973 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:11,973 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36075
-2022-08-26 14:06:11,974 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:11,974 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37779
-2022-08-26 14:06:11,974 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37147
-2022-08-26 14:06:11,974 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37843
-2022-08-26 14:06:11,974 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37843
-2022-08-26 14:06:11,974 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:11,974 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:11,974 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:11,974 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:11,974 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:11,974 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:11,974 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sf25m0qn
-2022-08-26 14:06:11,974 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d1q_b0ns
-2022-08-26 14:06:11,974 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:11,974 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:12,241 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36149', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:12,241 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36149
-2022-08-26 14:06:12,241 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:12,242 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37843
-2022-08-26 14:06:12,242 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:12,242 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36075', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:12,242 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:12,242 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36075
-2022-08-26 14:06:12,242 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:12,242 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37843
-2022-08-26 14:06:12,242 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:12,243 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:12,250 - distributed.scheduler - INFO - Receive client connection: Client-ea22b41e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:12,250 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:12,278 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:06:12,278 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:12,281 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:12,281 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:12,282 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36075
-2022-08-26 14:06:12,282 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36149
-2022-08-26 14:06:12,282 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d6e8b7f8-f573-4fee-9535-432141f75b8d Address tcp://127.0.0.1:36075 Status: Status.closing
-2022-08-26 14:06:12,283 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d6c61f92-9b88-4a31-8d69-e9631fb04d26 Address tcp://127.0.0.1:36149 Status: Status.closing
-2022-08-26 14:06:12,283 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36075', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:12,283 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36075
-2022-08-26 14:06:12,283 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36149', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:12,283 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36149
-2022-08-26 14:06:12,283 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:12,415 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:12,416 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:13,032 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36653
-2022-08-26 14:06:13,033 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36653
-2022-08-26 14:06:13,033 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:13,033 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34287
-2022-08-26 14:06:13,033 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37843
-2022-08-26 14:06:13,033 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:13,033 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:13,033 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:13,033 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-38ckd0kx
-2022-08-26 14:06:13,033 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:13,041 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33053
-2022-08-26 14:06:13,041 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33053
-2022-08-26 14:06:13,041 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:13,041 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34243
-2022-08-26 14:06:13,041 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37843
-2022-08-26 14:06:13,041 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:13,041 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:13,041 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:13,041 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z5fyjo1_
-2022-08-26 14:06:13,041 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:13,290 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33053', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:13,290 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33053
-2022-08-26 14:06:13,290 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:13,290 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37843
-2022-08-26 14:06:13,290 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:13,291 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:13,301 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36653', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:13,301 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36653
-2022-08-26 14:06:13,301 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:13,301 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37843
-2022-08-26 14:06:13,301 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:13,302 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:13,425 - distributed.scheduler - INFO - Remove client Client-ea22b41e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:13,425 - distributed.scheduler - INFO - Remove client Client-ea22b41e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:13,425 - distributed.scheduler - INFO - Close client connection: Client-ea22b41e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:13,426 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:46187'.
-2022-08-26 14:06:13,426 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:13,426 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:35235'.
-2022-08-26 14:06:13,426 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:13,426 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33053
-2022-08-26 14:06:13,427 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36653
-2022-08-26 14:06:13,427 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4be66704-2d6b-43c1-8935-c4a2a274e6fe Address tcp://127.0.0.1:33053 Status: Status.closing
-2022-08-26 14:06:13,427 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33053', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:13,427 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33053
-2022-08-26 14:06:13,427 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-17da2c29-8a80-4cd9-9bed-d71d27d8dc7e Address tcp://127.0.0.1:36653 Status: Status.closing
-2022-08-26 14:06:13,428 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36653', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:13,428 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36653
-2022-08-26 14:06:13,428 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:13,559 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:13,559 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:13,758 - distributed.utils_perf - WARNING - full garbage collections took 56% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_failed_workers.py::test_restart_sync 2022-08-26 14:06:14,600 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:06:14,602 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:14,605 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:14,605 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44169
-2022-08-26 14:06:14,605 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:06:14,619 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:44941'
-2022-08-26 14:06:14,664 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36991'
-2022-08-26 14:06:15,245 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43887
-2022-08-26 14:06:15,245 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43887
-2022-08-26 14:06:15,245 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39879
-2022-08-26 14:06:15,245 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44169
-2022-08-26 14:06:15,245 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:15,245 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:15,245 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:15,245 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pj02vrlc
-2022-08-26 14:06:15,245 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:15,292 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34339
-2022-08-26 14:06:15,292 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34339
-2022-08-26 14:06:15,292 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37081
-2022-08-26 14:06:15,292 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44169
-2022-08-26 14:06:15,292 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:15,292 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:15,292 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:15,292 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tp7oyxl4
-2022-08-26 14:06:15,292 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:15,518 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43887', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:15,778 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43887
-2022-08-26 14:06:15,778 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:15,778 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44169
-2022-08-26 14:06:15,779 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:15,779 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34339', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:15,779 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:15,779 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34339
-2022-08-26 14:06:15,779 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:15,780 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44169
-2022-08-26 14:06:15,780 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:15,780 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:15,808 - distributed.scheduler - INFO - Receive client connection: Client-ec418892-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:15,808 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:16,053 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:06:16,053 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:16,055 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:16,055 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:16,055 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34339
-2022-08-26 14:06:16,055 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43887
-2022-08-26 14:06:16,056 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c29e40a7-542d-467e-bc2d-307306c8d531 Address tcp://127.0.0.1:34339 Status: Status.closing
-2022-08-26 14:06:16,056 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1be74102-9151-437e-91c7-e2f6d7fc6e75 Address tcp://127.0.0.1:43887 Status: Status.closing
-2022-08-26 14:06:16,056 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34339', status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:16,056 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34339
-2022-08-26 14:06:16,057 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43887', status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:16,057 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43887
-2022-08-26 14:06:16,057 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:16,197 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:16,232 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:16,818 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39123
-2022-08-26 14:06:16,818 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39123
-2022-08-26 14:06:16,818 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41307
-2022-08-26 14:06:16,818 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44169
-2022-08-26 14:06:16,818 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:16,818 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:16,818 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:16,818 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-620l4ybl
-2022-08-26 14:06:16,818 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:16,853 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35997
-2022-08-26 14:06:16,853 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35997
-2022-08-26 14:06:16,853 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41347
-2022-08-26 14:06:16,853 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44169
-2022-08-26 14:06:16,853 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:16,853 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:16,853 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:16,853 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4v1_xhen
-2022-08-26 14:06:16,853 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:17,086 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39123', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:17,086 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39123
-2022-08-26 14:06:17,086 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:17,086 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44169
-2022-08-26 14:06:17,087 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:17,087 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:17,102 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35997', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:17,102 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35997
-2022-08-26 14:06:17,102 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:17,103 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44169
-2022-08-26 14:06:17,103 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:17,103 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:17,469 - distributed.scheduler - INFO - Remove client Client-ec418892-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:17,469 - distributed.scheduler - INFO - Remove client Client-ec418892-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:17,469 - distributed.scheduler - INFO - Close client connection: Client-ec418892-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_failed_workers.py::test_worker_doesnt_await_task_completion 2022-08-26 14:06:18,303 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:06:18,306 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:18,309 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:18,309 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44339
-2022-08-26 14:06:18,309 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:06:18,333 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:45381'
-2022-08-26 14:06:18,929 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-620l4ybl', purging
-2022-08-26 14:06:18,929 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-4v1_xhen', purging
-2022-08-26 14:06:18,935 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39419
-2022-08-26 14:06:18,935 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39419
-2022-08-26 14:06:18,935 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36013
-2022-08-26 14:06:18,935 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44339
-2022-08-26 14:06:18,935 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:18,935 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:18,935 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:18,935 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6hxucy1p
-2022-08-26 14:06:18,935 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:19,200 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39419', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:19,461 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39419
-2022-08-26 14:06:19,461 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:19,461 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44339
-2022-08-26 14:06:19,461 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:19,462 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:19,470 - distributed.scheduler - INFO - Receive client connection: Client-ee706939-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:19,470 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:19,572 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:06:19,572 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:19,574 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:19,575 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39419
-2022-08-26 14:06:19,576 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b1c4f158-1c24-4dfd-a32e-7cecae7bd2c9 Address tcp://127.0.0.1:39419 Status: Status.closing
-2022-08-26 14:06:19,576 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39419', status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:19,576 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39419
-2022-08-26 14:06:19,576 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:22,776 - distributed.nanny - WARNING - Worker process still alive after 3.1999992370605472 seconds, killing
-2022-08-26 14:06:22,780 - distributed.nanny - INFO - Worker process 599418 was killed by signal 9
-2022-08-26 14:06:22,781 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:22,782 - distributed.scheduler - INFO - Remove client Client-ee706939-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:22,783 - distributed.scheduler - INFO - Remove client Client-ee706939-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:22,783 - distributed.scheduler - INFO - Close client connection: Client-ee706939-2582-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_failed_workers.py::test_multiple_clients_restart 2022-08-26 14:06:22,795 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:22,797 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:22,797 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33971
-2022-08-26 14:06:22,797 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43661
-2022-08-26 14:06:22,803 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:37545'
-2022-08-26 14:06:22,803 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:44279'
-Traceback (most recent call last):
-  File "<string>", line 1, in <module>
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
-    exitcode = _main(fd, parent_sentinel)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/multiprocessing/spawn.py", line 124, in _main
-    preparation_data = reduction.pickle.load(from_parent)
-_pickle.UnpicklingError: pickle data was truncated
-2022-08-26 14:06:23,421 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46307
-2022-08-26 14:06:23,421 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46307
-2022-08-26 14:06:23,421 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:23,421 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43559
-2022-08-26 14:06:23,421 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33971
-2022-08-26 14:06:23,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:23,421 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:23,421 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:23,421 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e_ryidf8
-2022-08-26 14:06:23,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:23,435 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36585
-2022-08-26 14:06:23,435 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36585
-2022-08-26 14:06:23,435 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:23,435 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38679
-2022-08-26 14:06:23,435 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33971
-2022-08-26 14:06:23,435 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:23,435 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:23,435 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:23,435 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5zma00yo
-2022-08-26 14:06:23,435 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:23,685 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36585', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:23,685 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36585
-2022-08-26 14:06:23,685 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:23,685 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33971
-2022-08-26 14:06:23,686 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:23,686 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46307', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:23,686 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:23,686 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46307
-2022-08-26 14:06:23,686 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:23,686 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33971
-2022-08-26 14:06:23,687 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:23,687 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:23,693 - distributed.scheduler - INFO - Receive client connection: Client-f0f4d321-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:23,693 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:23,697 - distributed.scheduler - INFO - Receive client connection: Client-f0f563c2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:23,697 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:23,944 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:06:23,944 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:23,947 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:23,948 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:23,948 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36585
-2022-08-26 14:06:23,948 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46307
-2022-08-26 14:06:23,949 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fb16d9f2-6a51-4409-b80e-9ec322a7fe6c Address tcp://127.0.0.1:36585 Status: Status.closing
-2022-08-26 14:06:23,949 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d0a91217-e021-48a0-916e-7c42147b62a1 Address tcp://127.0.0.1:46307 Status: Status.closing
-2022-08-26 14:06:23,949 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36585', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:23,949 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36585
-2022-08-26 14:06:23,950 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46307', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:23,950 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46307
-2022-08-26 14:06:23,950 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:24,118 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:24,121 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:24,736 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46151
-2022-08-26 14:06:24,736 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46151
-2022-08-26 14:06:24,736 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:24,736 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44071
-2022-08-26 14:06:24,736 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33971
-2022-08-26 14:06:24,736 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:24,736 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:24,736 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:24,737 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rov37dbe
-2022-08-26 14:06:24,737 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:24,737 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37117
-2022-08-26 14:06:24,738 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37117
-2022-08-26 14:06:24,738 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:24,738 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43351
-2022-08-26 14:06:24,738 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33971
-2022-08-26 14:06:24,738 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:24,738 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:24,738 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:24,738 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-682rvso5
-2022-08-26 14:06:24,738 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:24,988 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37117', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:24,989 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37117
-2022-08-26 14:06:24,989 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:24,989 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33971
-2022-08-26 14:06:24,989 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:24,989 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:25,006 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46151', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:25,006 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46151
-2022-08-26 14:06:25,006 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:25,006 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33971
-2022-08-26 14:06:25,006 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:25,007 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:25,419 - distributed.scheduler - INFO - Remove client Client-f0f4d321-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:25,419 - distributed.scheduler - INFO - Remove client Client-f0f4d321-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:25,419 - distributed.scheduler - INFO - Close client connection: Client-f0f4d321-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:25,420 - distributed.scheduler - INFO - Remove client Client-f0f563c2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:25,420 - distributed.scheduler - INFO - Remove client Client-f0f563c2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:25,420 - distributed.scheduler - INFO - Close client connection: Client-f0f563c2-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:25,421 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:37545'.
-2022-08-26 14:06:25,421 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:25,421 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:44279'.
-2022-08-26 14:06:25,421 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:25,421 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37117
-2022-08-26 14:06:25,422 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46151
-2022-08-26 14:06:25,422 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5736ef7e-610b-4478-b5e3-cc94c8cc2583 Address tcp://127.0.0.1:37117 Status: Status.closing
-2022-08-26 14:06:25,422 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37117', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:25,422 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37117
-2022-08-26 14:06:25,422 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d1dc8796-58ac-4677-afa5-ca084d26fe4d Address tcp://127.0.0.1:46151 Status: Status.closing
-2022-08-26 14:06:25,423 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46151', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:25,423 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46151
-2022-08-26 14:06:25,423 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:25,592 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:25,592 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:25,791 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_failed_workers.py::test_restart_scheduler 2022-08-26 14:06:25,797 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:25,799 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:25,799 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39855
-2022-08-26 14:06:25,799 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37333
-2022-08-26 14:06:25,804 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:42171'
-2022-08-26 14:06:25,804 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:39373'
-2022-08-26 14:06:26,419 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46685
-2022-08-26 14:06:26,419 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46685
-2022-08-26 14:06:26,419 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:26,419 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40805
-2022-08-26 14:06:26,419 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39855
-2022-08-26 14:06:26,419 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:26,419 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:26,419 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:26,419 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fkbi1d0m
-2022-08-26 14:06:26,419 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:26,425 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39225
-2022-08-26 14:06:26,425 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39225
-2022-08-26 14:06:26,425 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:26,425 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40823
-2022-08-26 14:06:26,425 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39855
-2022-08-26 14:06:26,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:26,425 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:26,425 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:26,425 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-o20umu9q
-2022-08-26 14:06:26,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:26,673 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39225', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:26,673 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39225
-2022-08-26 14:06:26,673 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:26,673 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39855
-2022-08-26 14:06:26,673 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:26,674 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:26,684 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46685', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:26,685 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46685
-2022-08-26 14:06:26,685 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:26,685 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39855
-2022-08-26 14:06:26,685 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:26,685 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:26,694 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:06:26,694 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:26,697 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:26,697 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:26,697 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39225
-2022-08-26 14:06:26,698 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46685
-2022-08-26 14:06:26,698 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9d0bdc97-7aa3-478a-9eb7-187b42140976 Address tcp://127.0.0.1:39225 Status: Status.closing
-2022-08-26 14:06:26,698 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39225', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:26,699 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-da0b88ce-6c72-45e0-92a3-0669fc86ce9f Address tcp://127.0.0.1:46685 Status: Status.closing
-2022-08-26 14:06:26,699 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39225
-2022-08-26 14:06:26,699 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46685', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:26,699 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46685
-2022-08-26 14:06:26,699 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:26,831 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:26,833 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:27,447 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37545
-2022-08-26 14:06:27,447 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37545
-2022-08-26 14:06:27,447 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:27,447 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38673
-2022-08-26 14:06:27,447 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39855
-2022-08-26 14:06:27,447 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:27,447 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:27,447 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:27,447 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5_f18zob
-2022-08-26 14:06:27,447 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:27,453 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36095
-2022-08-26 14:06:27,453 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36095
-2022-08-26 14:06:27,453 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:27,453 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33987
-2022-08-26 14:06:27,453 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39855
-2022-08-26 14:06:27,453 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:27,453 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:27,453 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:27,453 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_hhf6wro
-2022-08-26 14:06:27,453 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:27,700 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36095', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:27,701 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36095
-2022-08-26 14:06:27,701 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:27,701 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39855
-2022-08-26 14:06:27,701 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:27,701 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:27,712 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37545', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:27,712 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37545
-2022-08-26 14:06:27,712 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:27,713 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39855
-2022-08-26 14:06:27,713 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:27,713 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:27,839 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:42171'.
-2022-08-26 14:06:27,839 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:27,839 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:39373'.
-2022-08-26 14:06:27,840 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:27,840 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36095
-2022-08-26 14:06:27,840 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37545
-2022-08-26 14:06:27,840 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ddecf7b9-ed7c-43c4-b44e-def0810895ad Address tcp://127.0.0.1:36095 Status: Status.closing
-2022-08-26 14:06:27,841 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36095', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:27,841 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36095
-2022-08-26 14:06:27,841 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9f14bd15-ec2e-4821-84f9-0b518b64e903 Address tcp://127.0.0.1:37545 Status: Status.closing
-2022-08-26 14:06:27,841 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37545', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:27,841 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37545
-2022-08-26 14:06:27,841 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:27,971 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:27,971 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:28,169 - distributed.utils_perf - WARNING - full garbage collections took 55% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_failed_workers.py::test_forgotten_futures_dont_clean_up_new_futures 2022-08-26 14:06:28,175 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:28,176 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:28,177 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37155
-2022-08-26 14:06:28,177 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39687
-2022-08-26 14:06:28,182 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:35535'
-2022-08-26 14:06:28,182 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:35461'
-2022-08-26 14:06:28,801 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42773
-2022-08-26 14:06:28,801 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42773
-2022-08-26 14:06:28,801 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:28,801 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44051
-2022-08-26 14:06:28,801 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37155
-2022-08-26 14:06:28,801 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:28,801 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:28,801 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:28,801 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cr9j17b2
-2022-08-26 14:06:28,801 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:28,805 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34365
-2022-08-26 14:06:28,805 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34365
-2022-08-26 14:06:28,805 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:28,805 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35283
-2022-08-26 14:06:28,805 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37155
-2022-08-26 14:06:28,805 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:28,805 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:28,805 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:28,805 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j48t93tt
-2022-08-26 14:06:28,805 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:29,054 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34365', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:29,054 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34365
-2022-08-26 14:06:29,055 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:29,055 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37155
-2022-08-26 14:06:29,055 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:29,055 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:29,073 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42773', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:29,073 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42773
-2022-08-26 14:06:29,074 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:29,074 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37155
-2022-08-26 14:06:29,074 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:29,074 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:29,123 - distributed.scheduler - INFO - Receive client connection: Client-f4316cd4-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:29,124 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:29,125 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:06:29,125 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:29,128 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:29,128 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:29,129 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34365
-2022-08-26 14:06:29,129 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42773
-2022-08-26 14:06:29,129 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-692c736e-097f-4d2b-8f75-bb8278657e55 Address tcp://127.0.0.1:34365 Status: Status.closing
-2022-08-26 14:06:29,129 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34365', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:29,129 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34365
-2022-08-26 14:06:29,130 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9fcd53d1-82a9-4d06-a42e-a72368d2ad04 Address tcp://127.0.0.1:42773 Status: Status.closing
-2022-08-26 14:06:29,130 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42773', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:29,130 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42773
-2022-08-26 14:06:29,130 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:29,263 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:29,265 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:29,880 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33523
-2022-08-26 14:06:29,880 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33523
-2022-08-26 14:06:29,880 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:29,880 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42417
-2022-08-26 14:06:29,881 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37155
-2022-08-26 14:06:29,881 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:29,881 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:29,881 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:29,881 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-utqailjt
-2022-08-26 14:06:29,881 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:29,886 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39151
-2022-08-26 14:06:29,886 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39151
-2022-08-26 14:06:29,886 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:29,886 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36975
-2022-08-26 14:06:29,886 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37155
-2022-08-26 14:06:29,886 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:29,886 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:29,886 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:29,886 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h3qnmv7t
-2022-08-26 14:06:29,886 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:30,135 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39151', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:30,136 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39151
-2022-08-26 14:06:30,136 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:30,136 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37155
-2022-08-26 14:06:30,136 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:30,136 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:30,144 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33523', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:30,145 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33523
-2022-08-26 14:06:30,145 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:30,145 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37155
-2022-08-26 14:06:30,145 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:30,145 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:30,386 - distributed.scheduler - INFO - Remove client Client-f4316cd4-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:30,387 - distributed.scheduler - INFO - Remove client Client-f4316cd4-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:30,387 - distributed.scheduler - INFO - Close client connection: Client-f4316cd4-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:30,387 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:35535'.
-2022-08-26 14:06:30,387 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:30,388 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:35461'.
-2022-08-26 14:06:30,388 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:30,388 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39151
-2022-08-26 14:06:30,389 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33523
-2022-08-26 14:06:30,389 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9d0b74f0-05b8-4535-a60f-a07f3788008d Address tcp://127.0.0.1:39151 Status: Status.closing
-2022-08-26 14:06:30,389 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39151', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:30,389 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39151
-2022-08-26 14:06:30,389 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ecb51ddc-51f4-45aa-8ba7-e2bc62cdae4d Address tcp://127.0.0.1:33523 Status: Status.closing
-2022-08-26 14:06:30,390 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33523', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:30,390 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33523
-2022-08-26 14:06:30,390 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:30,558 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:30,558 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:30,756 - distributed.utils_perf - WARNING - full garbage collections took 58% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_failed_workers.py::test_broken_worker_during_computation SKIPPED
-distributed/tests/test_failed_workers.py::test_restart_during_computation 2022-08-26 14:06:30,763 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:30,765 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:30,765 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44645
-2022-08-26 14:06:30,765 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34253
-2022-08-26 14:06:30,770 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33295'
-2022-08-26 14:06:30,770 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:45971'
-2022-08-26 14:06:31,387 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33697
-2022-08-26 14:06:31,387 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33697
-2022-08-26 14:06:31,387 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:31,387 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36387
-2022-08-26 14:06:31,387 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44645
-2022-08-26 14:06:31,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:31,388 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:31,388 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:31,388 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rdpk5znt
-2022-08-26 14:06:31,388 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:31,394 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39433
-2022-08-26 14:06:31,394 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39433
-2022-08-26 14:06:31,394 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:31,395 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36311
-2022-08-26 14:06:31,395 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44645
-2022-08-26 14:06:31,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:31,395 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:31,395 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:31,395 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vusn1tsw
-2022-08-26 14:06:31,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:31,644 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39433', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:31,644 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39433
-2022-08-26 14:06:31,645 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:31,645 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44645
-2022-08-26 14:06:31,645 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:31,645 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:31,657 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33697', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:31,657 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33697
-2022-08-26 14:06:31,657 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:31,657 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44645
-2022-08-26 14:06:31,657 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:31,658 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:31,710 - distributed.scheduler - INFO - Receive client connection: Client-f5bc1896-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:31,710 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:32,234 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:06:32,237 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:32,240 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:32,241 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:32,241 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39433
-2022-08-26 14:06:32,242 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5194b285-d73e-41f3-84ed-3cda9ddd2d0f Address tcp://127.0.0.1:39433 Status: Status.closing
-2022-08-26 14:06:32,242 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33697
-2022-08-26 14:06:32,242 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39433', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:32,242 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39433
-2022-08-26 14:06:32,243 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7f90d2ad-71f5-4c4f-be53-fc13bbc92b9a Address tcp://127.0.0.1:33697 Status: Status.closing
-2022-08-26 14:06:32,243 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33697', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:32,243 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33697
-2022-08-26 14:06:32,243 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:32,412 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:32,418 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:33,030 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37411
-2022-08-26 14:06:33,030 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37411
-2022-08-26 14:06:33,030 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:33,030 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44683
-2022-08-26 14:06:33,030 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36471
-2022-08-26 14:06:33,030 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44645
-2022-08-26 14:06:33,030 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44683
-2022-08-26 14:06:33,030 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,030 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:33,030 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:33,031 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42471
-2022-08-26 14:06:33,031 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:33,031 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44645
-2022-08-26 14:06:33,031 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uoqz6rwq
-2022-08-26 14:06:33,031 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,031 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:33,031 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,031 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:33,031 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7_gi158v
-2022-08-26 14:06:33,031 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,279 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44683', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:33,280 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44683
-2022-08-26 14:06:33,280 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:33,280 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44645
-2022-08-26 14:06:33,280 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,281 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:33,297 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37411', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:33,297 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37411
-2022-08-26 14:06:33,297 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:33,297 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44645
-2022-08-26 14:06:33,297 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,298 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:33,425 - distributed.scheduler - INFO - Remove client Client-f5bc1896-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:33,425 - distributed.scheduler - INFO - Remove client Client-f5bc1896-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:33,426 - distributed.scheduler - INFO - Close client connection: Client-f5bc1896-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:33,426 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:33295'.
-2022-08-26 14:06:33,426 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:33,426 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:45971'.
-2022-08-26 14:06:33,427 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:33,427 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37411
-2022-08-26 14:06:33,427 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44683
-2022-08-26 14:06:33,427 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5742d06b-c2f2-4d32-b2d5-d8e0a6a3927c Address tcp://127.0.0.1:37411 Status: Status.closing
-2022-08-26 14:06:33,427 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37411', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:33,428 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37411
-2022-08-26 14:06:33,428 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-72582ebd-aac8-42af-9a8e-de003b4fc97f Address tcp://127.0.0.1:44683 Status: Status.closing
-2022-08-26 14:06:33,428 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44683', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:33,428 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44683
-2022-08-26 14:06:33,428 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:33,558 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:33,558 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:33,758 - distributed.utils_perf - WARNING - full garbage collections took 58% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_failed_workers.py::test_worker_who_has_clears_after_failed_connection SKIPPED
-distributed/tests/test_failed_workers.py::test_worker_same_host_replicas_missing 2022-08-26 14:06:33,765 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:33,766 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:33,767 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43731
-2022-08-26 14:06:33,767 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37419
-2022-08-26 14:06:33,773 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44107
-2022-08-26 14:06:33,773 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44107
-2022-08-26 14:06:33,773 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:33,773 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34935
-2022-08-26 14:06:33,773 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43731
-2022-08-26 14:06:33,773 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,773 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:33,773 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:33,773 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9xmhauzk
-2022-08-26 14:06:33,773 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,774 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39145
-2022-08-26 14:06:33,774 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39145
-2022-08-26 14:06:33,774 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:33,774 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36657
-2022-08-26 14:06:33,774 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43731
-2022-08-26 14:06:33,774 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,774 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:33,774 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:33,774 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ldod00ph
-2022-08-26 14:06:33,774 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,775 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37749
-2022-08-26 14:06:33,775 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37749
-2022-08-26 14:06:33,775 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:06:33,775 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39663
-2022-08-26 14:06:33,775 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43731
-2022-08-26 14:06:33,775 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,775 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:06:33,775 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:33,775 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c0dg43mu
-2022-08-26 14:06:33,776 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,779 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44107', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:33,779 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44107
-2022-08-26 14:06:33,780 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:33,780 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39145', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:33,780 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39145
-2022-08-26 14:06:33,780 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:33,781 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37749', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:33,781 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37749
-2022-08-26 14:06:33,781 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:33,781 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43731
-2022-08-26 14:06:33,781 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,782 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43731
-2022-08-26 14:06:33,782 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,782 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43731
-2022-08-26 14:06:33,782 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:33,782 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:33,782 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:33,782 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:33,796 - distributed.scheduler - INFO - Receive client connection: Client-f6fa74de-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:33,796 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:34,396 - distributed.scheduler - INFO - Remove client Client-f6fa74de-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:34,396 - distributed.scheduler - INFO - Remove client Client-f6fa74de-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:34,397 - distributed.scheduler - INFO - Close client connection: Client-f6fa74de-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:34,403 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44107
-2022-08-26 14:06:34,403 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39145
-2022-08-26 14:06:34,404 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37749
-2022-08-26 14:06:34,405 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44107', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:34,405 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44107
-2022-08-26 14:06:34,405 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39145', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:34,405 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39145
-2022-08-26 14:06:34,405 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37749', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:34,405 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37749
-2022-08-26 14:06:34,405 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:34,406 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-84080945-2cd3-417a-bba3-d239a359cf95 Address tcp://127.0.0.1:44107 Status: Status.closing
-2022-08-26 14:06:34,406 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5dd9fc83-80c2-4943-8c93-5a7b6dda3c57 Address tcp://127.0.0.1:39145 Status: Status.closing
-2022-08-26 14:06:34,406 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ff428d65-0396-4f5f-af4e-c74e4c490f8f Address tcp://127.0.0.1:37749 Status: Status.closing
-2022-08-26 14:06:34,409 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:34,409 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:34,610 - distributed.utils_perf - WARNING - full garbage collections took 57% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_failed_workers.py::test_restart_timeout_on_long_running_task SKIPPED
-distributed/tests/test_failed_workers.py::test_worker_time_to_live SKIPPED
-distributed/tests/test_failed_workers.py::test_forget_data_not_supposed_to_have 2022-08-26 14:06:34,618 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:34,619 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:34,620 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46769
-2022-08-26 14:06:34,620 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33779
-2022-08-26 14:06:34,622 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44541
-2022-08-26 14:06:34,622 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44541
-2022-08-26 14:06:34,623 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:34,623 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39425
-2022-08-26 14:06:34,623 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46769
-2022-08-26 14:06:34,623 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:34,623 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:34,623 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:34,623 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f9xt0hqc
-2022-08-26 14:06:34,623 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:34,625 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44541', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:34,625 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44541
-2022-08-26 14:06:34,625 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:34,625 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46769
-2022-08-26 14:06:34,625 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:34,626 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:34,639 - distributed.scheduler - INFO - Receive client connection: Client-f77b0f3c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:34,639 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:34,642 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46153
-2022-08-26 14:06:34,642 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46153
-2022-08-26 14:06:34,642 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45307
-2022-08-26 14:06:34,642 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46769
-2022-08-26 14:06:34,642 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:34,643 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:06:34,643 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:34,643 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ehtr7sf3
-2022-08-26 14:06:34,643 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:34,645 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46153', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:34,645 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46153
-2022-08-26 14:06:34,645 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:34,645 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46769
-2022-08-26 14:06:34,645 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:34,647 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:34,675 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46153
-2022-08-26 14:06:34,676 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46153', status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:34,676 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46153
-2022-08-26 14:06:34,676 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: BlockedGatherDep-cfaf4312-b7ef-472a-894d-995a661effc7 Address tcp://127.0.0.1:46153 Status: Status.closing
-2022-08-26 14:06:34,677 - distributed.scheduler - INFO - Remove client Client-f77b0f3c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:34,677 - distributed.scheduler - INFO - Remove client Client-f77b0f3c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:34,678 - distributed.scheduler - INFO - Close client connection: Client-f77b0f3c-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:34,678 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44541
-2022-08-26 14:06:34,679 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44541', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:34,679 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44541
-2022-08-26 14:06:34,679 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:34,679 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4980e0c7-ab04-427c-92b7-9d78c56f926d Address tcp://127.0.0.1:44541 Status: Status.closing
-2022-08-26 14:06:34,680 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:34,680 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:34,879 - distributed.utils_perf - WARNING - full garbage collections took 57% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_failed_workers.py::test_failing_worker_with_additional_replicas_on_cluster 2022-08-26 14:06:34,885 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:34,887 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:34,887 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46881
-2022-08-26 14:06:34,887 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39435
-2022-08-26 14:06:34,894 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36617'
-2022-08-26 14:06:34,895 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:45537'
-2022-08-26 14:06:34,895 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36981'
-2022-08-26 14:06:35,520 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43511
-2022-08-26 14:06:35,520 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43511
-2022-08-26 14:06:35,520 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:35,520 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35007
-2022-08-26 14:06:35,520 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46881
-2022-08-26 14:06:35,520 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:35,520 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:35,520 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:35,520 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-67ikwzi1
-2022-08-26 14:06:35,520 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:35,528 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36435
-2022-08-26 14:06:35,528 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36435
-2022-08-26 14:06:35,528 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:35,528 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42221
-2022-08-26 14:06:35,528 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46881
-2022-08-26 14:06:35,528 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:35,528 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:35,528 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:35,528 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-luizmgcl
-2022-08-26 14:06:35,528 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:35,531 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33369
-2022-08-26 14:06:35,531 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33369
-2022-08-26 14:06:35,531 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:06:35,531 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39163
-2022-08-26 14:06:35,531 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46881
-2022-08-26 14:06:35,531 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:35,531 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:35,531 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:35,531 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nrrx_4di
-2022-08-26 14:06:35,531 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:35,785 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33369', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:35,785 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33369
-2022-08-26 14:06:35,785 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:35,785 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46881
-2022-08-26 14:06:35,786 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:35,786 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:35,790 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36435', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:35,791 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36435
-2022-08-26 14:06:35,791 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:35,791 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46881
-2022-08-26 14:06:35,791 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:35,791 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:35,804 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43511', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:35,804 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43511
-2022-08-26 14:06:35,805 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:35,805 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46881
-2022-08-26 14:06:35,805 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:35,805 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:35,839 - distributed.scheduler - INFO - Receive client connection: Client-f83234cb-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:35,840 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:36,422 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36617'.
-2022-08-26 14:06:36,422 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:36,423 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36435
-2022-08-26 14:06:36,424 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8fba95be-e8ac-4aa5-8bf9-da1406a0a6c0 Address tcp://127.0.0.1:36435 Status: Status.closing
-2022-08-26 14:06:36,424 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36435', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:06:36,424 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36435
-2022-08-26 14:06:37,591 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:36435
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x5558dc388050>: ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1992, in gather_dep
-    response = await get_data_from_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2731, in get_data_from_worker
-    return await retry_operation(_get_data, operation="get_data_from_worker")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 383, in retry_operation
-    return await retry(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 368, in retry
-    return await coro()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2708, in _get_data
-    comm = await rpc.connect(worker)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1372, in connect
-    return await connect_attempt
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1308, in _connect
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 317, in connect
-    raise OSError(
-OSError: Timed out trying to connect to tcp://127.0.0.1:36435 after 1 s
-2022-08-26 14:06:38,257 - distributed.scheduler - INFO - Remove client Client-f83234cb-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:38,257 - distributed.scheduler - INFO - Remove client Client-f83234cb-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:38,257 - distributed.scheduler - INFO - Close client connection: Client-f83234cb-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:38,257 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:45537'.
-2022-08-26 14:06:38,258 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:38,258 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36981'.
-2022-08-26 14:06:38,258 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:38,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43511
-2022-08-26 14:06:38,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33369
-2022-08-26 14:06:38,259 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7664ab4e-b2cf-4097-9d8a-2b0e83af2711 Address tcp://127.0.0.1:43511 Status: Status.closing
-2022-08-26 14:06:38,259 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43511', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:38,259 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-390aa222-c443-4a63-8bfc-d1230974758f Address tcp://127.0.0.1:33369 Status: Status.closing
-2022-08-26 14:06:38,259 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43511
-2022-08-26 14:06:38,260 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33369', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:38,260 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33369
-2022-08-26 14:06:38,260 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:38,432 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:38,432 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:38,631 - distributed.utils_perf - WARNING - full garbage collections took 60% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_imports.py::test_can_import_distributed_in_background_thread PASSED
-distributed/tests/test_init.py::test_version PASSED
-distributed/tests/test_init.py::test_git_revision FAILED
-distributed/tests/test_locks.py::test_lock 2022-08-26 14:06:39,771 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:39,773 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:39,773 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39957
-2022-08-26 14:06:39,773 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39315
-2022-08-26 14:06:39,778 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38257
-2022-08-26 14:06:39,778 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38257
-2022-08-26 14:06:39,778 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:39,778 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36515
-2022-08-26 14:06:39,778 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39957
-2022-08-26 14:06:39,778 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:39,778 - distributed.worker - INFO -               Threads:                          8
-2022-08-26 14:06:39,778 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:39,778 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x8cb9_sa
-2022-08-26 14:06:39,778 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:39,779 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39593
-2022-08-26 14:06:39,779 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39593
-2022-08-26 14:06:39,779 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:39,779 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45877
-2022-08-26 14:06:39,779 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39957
-2022-08-26 14:06:39,779 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:39,779 - distributed.worker - INFO -               Threads:                          8
-2022-08-26 14:06:39,779 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:39,779 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1ix9mw96
-2022-08-26 14:06:39,779 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:39,782 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38257', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:39,782 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38257
-2022-08-26 14:06:39,782 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:39,783 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39593', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:39,783 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39593
-2022-08-26 14:06:39,783 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:39,783 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39957
-2022-08-26 14:06:39,783 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:39,783 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39957
-2022-08-26 14:06:39,783 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:39,784 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:39,784 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:39,797 - distributed.scheduler - INFO - Receive client connection: Client-fa8e2a5b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:39,798 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:40,939 - distributed.scheduler - INFO - Remove client Client-fa8e2a5b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:40,940 - distributed.scheduler - INFO - Remove client Client-fa8e2a5b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:40,940 - distributed.scheduler - INFO - Close client connection: Client-fa8e2a5b-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:40,942 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38257
-2022-08-26 14:06:40,942 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39593
-2022-08-26 14:06:40,943 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38257', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:40,943 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38257
-2022-08-26 14:06:40,943 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39593', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:40,943 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39593
-2022-08-26 14:06:40,943 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:40,944 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c947090d-eafb-4080-9518-c9c4707d2ba8 Address tcp://127.0.0.1:38257 Status: Status.closing
-2022-08-26 14:06:40,944 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-909dae14-9e64-4c25-b9d6-60e90035ece0 Address tcp://127.0.0.1:39593 Status: Status.closing
-2022-08-26 14:06:40,946 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:40,946 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:41,149 - distributed.utils_perf - WARNING - full garbage collections took 59% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_locks.py::test_timeout 2022-08-26 14:06:41,156 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:41,157 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:41,157 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37857
-2022-08-26 14:06:41,157 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38777
-2022-08-26 14:06:41,162 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37577
-2022-08-26 14:06:41,162 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37577
-2022-08-26 14:06:41,162 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:41,162 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36309
-2022-08-26 14:06:41,162 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37857
-2022-08-26 14:06:41,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,162 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:41,162 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:41,162 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t1iy_ro8
-2022-08-26 14:06:41,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,163 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44079
-2022-08-26 14:06:41,163 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44079
-2022-08-26 14:06:41,163 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:41,163 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35793
-2022-08-26 14:06:41,163 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37857
-2022-08-26 14:06:41,163 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,163 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:41,163 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:41,163 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2fpi9q44
-2022-08-26 14:06:41,163 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,166 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37577', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:41,166 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37577
-2022-08-26 14:06:41,166 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,167 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44079', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:41,167 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44079
-2022-08-26 14:06:41,167 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,167 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37857
-2022-08-26 14:06:41,167 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,168 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37857
-2022-08-26 14:06:41,168 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,168 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,168 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,182 - distributed.scheduler - INFO - Receive client connection: Client-fb6161fc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,182 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,285 - distributed.scheduler - INFO - Remove client Client-fb6161fc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,285 - distributed.scheduler - INFO - Remove client Client-fb6161fc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,285 - distributed.scheduler - INFO - Close client connection: Client-fb6161fc-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,286 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37577
-2022-08-26 14:06:41,286 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44079
-2022-08-26 14:06:41,287 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37577', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:41,287 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37577
-2022-08-26 14:06:41,287 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44079', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:41,287 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44079
-2022-08-26 14:06:41,287 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:41,287 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-86fc9460-39af-4ffa-a823-c71ea84f7c09 Address tcp://127.0.0.1:37577 Status: Status.closing
-2022-08-26 14:06:41,288 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-28de60ae-28fb-4d46-8f01-d6277864b6f8 Address tcp://127.0.0.1:44079 Status: Status.closing
-2022-08-26 14:06:41,288 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:41,289 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:41,489 - distributed.utils_perf - WARNING - full garbage collections took 59% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_locks.py::test_acquires_with_zero_timeout 2022-08-26 14:06:41,496 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:41,497 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:41,498 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39001
-2022-08-26 14:06:41,498 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39825
-2022-08-26 14:06:41,502 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42289
-2022-08-26 14:06:41,502 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42289
-2022-08-26 14:06:41,502 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:41,502 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36015
-2022-08-26 14:06:41,502 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39001
-2022-08-26 14:06:41,502 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,502 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:41,502 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:41,502 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k8vv5duh
-2022-08-26 14:06:41,503 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,503 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33007
-2022-08-26 14:06:41,503 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33007
-2022-08-26 14:06:41,503 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:41,503 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38695
-2022-08-26 14:06:41,503 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39001
-2022-08-26 14:06:41,503 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,503 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:41,503 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:41,503 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-v29urc5f
-2022-08-26 14:06:41,503 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,506 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42289', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:41,507 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42289
-2022-08-26 14:06:41,507 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,507 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33007', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:41,507 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33007
-2022-08-26 14:06:41,507 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,508 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39001
-2022-08-26 14:06:41,508 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,508 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39001
-2022-08-26 14:06:41,508 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,508 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,508 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,522 - distributed.scheduler - INFO - Receive client connection: Client-fb9548f6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,522 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,533 - distributed.scheduler - INFO - Remove client Client-fb9548f6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,534 - distributed.scheduler - INFO - Remove client Client-fb9548f6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,534 - distributed.scheduler - INFO - Close client connection: Client-fb9548f6-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,534 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42289
-2022-08-26 14:06:41,534 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33007
-2022-08-26 14:06:41,535 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42289', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:41,535 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42289
-2022-08-26 14:06:41,536 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33007', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:41,536 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33007
-2022-08-26 14:06:41,536 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:41,536 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e2cf01eb-5334-497b-8896-ef3440bd64f6 Address tcp://127.0.0.1:42289 Status: Status.closing
-2022-08-26 14:06:41,536 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fbc2e219-3199-4dbd-a261-1ae954198a40 Address tcp://127.0.0.1:33007 Status: Status.closing
-2022-08-26 14:06:41,537 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:41,537 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:41,737 - distributed.utils_perf - WARNING - full garbage collections took 66% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_locks.py::test_acquires_blocking 2022-08-26 14:06:41,743 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:41,744 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:41,744 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45981
-2022-08-26 14:06:41,744 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43811
-2022-08-26 14:06:41,749 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40903
-2022-08-26 14:06:41,749 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40903
-2022-08-26 14:06:41,749 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:41,749 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35421
-2022-08-26 14:06:41,749 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45981
-2022-08-26 14:06:41,749 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,749 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:41,749 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:41,749 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i9_318fh
-2022-08-26 14:06:41,749 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,750 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41719
-2022-08-26 14:06:41,750 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41719
-2022-08-26 14:06:41,750 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:41,750 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39259
-2022-08-26 14:06:41,750 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45981
-2022-08-26 14:06:41,750 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,750 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:41,750 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:41,750 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tux60vf2
-2022-08-26 14:06:41,750 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,753 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40903', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:41,753 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40903
-2022-08-26 14:06:41,753 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,754 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41719', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:41,754 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41719
-2022-08-26 14:06:41,754 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,754 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45981
-2022-08-26 14:06:41,754 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,755 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45981
-2022-08-26 14:06:41,755 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:41,755 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,755 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,769 - distributed.scheduler - INFO - Receive client connection: Client-fbbaef9e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,769 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:41,780 - distributed.scheduler - INFO - Remove client Client-fbbaef9e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,780 - distributed.scheduler - INFO - Remove client Client-fbbaef9e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,780 - distributed.scheduler - INFO - Close client connection: Client-fbbaef9e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:41,781 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40903
-2022-08-26 14:06:41,781 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41719
-2022-08-26 14:06:41,782 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40903', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:41,782 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40903
-2022-08-26 14:06:41,782 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41719', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:41,782 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41719
-2022-08-26 14:06:41,782 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:41,782 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9c5f8758-a0c5-4895-89d0-63f1dd9e0f19 Address tcp://127.0.0.1:40903 Status: Status.closing
-2022-08-26 14:06:41,782 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5714f760-93d7-48f8-8d87-7da7caffcc10 Address tcp://127.0.0.1:41719 Status: Status.closing
-2022-08-26 14:06:41,783 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:41,783 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:41,983 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_locks.py::test_timeout_sync 2022-08-26 14:06:42,830 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:06:42,832 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:42,835 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:42,835 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34035
-2022-08-26 14:06:42,835 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:06:42,852 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44281
-2022-08-26 14:06:42,852 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44281
-2022-08-26 14:06:42,852 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36797
-2022-08-26 14:06:42,852 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34035
-2022-08-26 14:06:42,852 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:42,852 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:42,852 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:42,852 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f_gw8y1f
-2022-08-26 14:06:42,852 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:42,897 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36841
-2022-08-26 14:06:42,897 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36841
-2022-08-26 14:06:42,897 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44181
-2022-08-26 14:06:42,897 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34035
-2022-08-26 14:06:42,897 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:42,897 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:42,897 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:42,897 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fx85skj2
-2022-08-26 14:06:42,897 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:43,129 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44281', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:43,386 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44281
-2022-08-26 14:06:43,386 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:43,386 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34035
-2022-08-26 14:06:43,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:43,387 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36841', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:43,387 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:43,388 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36841
-2022-08-26 14:06:43,388 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:43,388 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34035
-2022-08-26 14:06:43,388 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:43,389 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:43,394 - distributed.scheduler - INFO - Receive client connection: Client-fcb2eaca-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:43,394 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:06:43,498 - distributed.scheduler - INFO - Remove client Client-fcb2eaca-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:43,498 - distributed.scheduler - INFO - Remove client Client-fcb2eaca-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:43,498 - distributed.scheduler - INFO - Close client connection: Client-fcb2eaca-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_locks.py::test_errors 2022-08-26 14:06:43,513 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:43,514 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:43,515 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34225
-2022-08-26 14:06:43,515 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36771
-2022-08-26 14:06:43,515 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-f_gw8y1f', purging
-2022-08-26 14:06:43,515 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-fx85skj2', purging
-2022-08-26 14:06:43,519 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41009
-2022-08-26 14:06:43,520 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41009
-2022-08-26 14:06:43,520 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:43,520 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36347
-2022-08-26 14:06:43,520 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34225
-2022-08-26 14:06:43,520 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:43,520 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:43,520 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:43,520 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4r0hv3o1
-2022-08-26 14:06:43,520 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:43,520 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33143
-2022-08-26 14:06:43,521 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33143
-2022-08-26 14:06:43,521 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:43,521 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40677
-2022-08-26 14:06:43,521 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34225
-2022-08-26 14:06:43,521 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:43,521 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:43,521 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:43,521 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1bhsdi5q
-2022-08-26 14:06:43,521 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:43,524 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41009', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:43,524 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41009
-2022-08-26 14:06:43,524 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:43,525 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33143', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:43,525 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33143
-2022-08-26 14:06:43,525 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:43,525 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34225
-2022-08-26 14:06:43,525 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:43,526 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34225
-2022-08-26 14:06:43,526 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:43,526 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:43,526 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:43,540 - distributed.scheduler - INFO - Receive client connection: Client-fcc92fab-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:43,540 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:43,551 - distributed.scheduler - INFO - Remove client Client-fcc92fab-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:43,551 - distributed.scheduler - INFO - Remove client Client-fcc92fab-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:43,552 - distributed.scheduler - INFO - Close client connection: Client-fcc92fab-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:43,552 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41009
-2022-08-26 14:06:43,552 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33143
-2022-08-26 14:06:43,553 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41009', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:43,553 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41009
-2022-08-26 14:06:43,554 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33143', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:43,554 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33143
-2022-08-26 14:06:43,554 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:43,554 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-52282357-825b-441a-8934-5f76fe762f35 Address tcp://127.0.0.1:41009 Status: Status.closing
-2022-08-26 14:06:43,554 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-279c6ef3-c082-4eb5-8cab-d50b01c93eeb Address tcp://127.0.0.1:33143 Status: Status.closing
-2022-08-26 14:06:43,555 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:43,555 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:43,754 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_locks.py::test_lock_sync 2022-08-26 14:06:44,602 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:06:44,604 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:44,607 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:44,607 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42085
-2022-08-26 14:06:44,607 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:06:44,629 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37575
-2022-08-26 14:06:44,630 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37575
-2022-08-26 14:06:44,630 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34441
-2022-08-26 14:06:44,630 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42085
-2022-08-26 14:06:44,630 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:44,630 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:44,630 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:44,630 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-47xhssxi
-2022-08-26 14:06:44,630 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:44,677 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36641
-2022-08-26 14:06:44,677 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36641
-2022-08-26 14:06:44,677 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42013
-2022-08-26 14:06:44,677 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42085
-2022-08-26 14:06:44,677 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:44,677 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:44,677 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:44,677 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8v56g02w
-2022-08-26 14:06:44,677 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:44,913 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37575', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:45,172 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37575
-2022-08-26 14:06:45,172 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:45,172 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42085
-2022-08-26 14:06:45,173 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:45,173 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36641', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:45,174 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36641
-2022-08-26 14:06:45,174 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:45,174 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:45,174 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42085
-2022-08-26 14:06:45,174 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:45,175 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:45,179 - distributed.scheduler - INFO - Receive client connection: Client-fdc36b54-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:45,180 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:45,199 - distributed.scheduler - INFO - Receive client connection: Client-worker-fdc5ddd1-2582-11ed-aa47-00d861bc4509
-2022-08-26 14:06:45,199 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:45,199 - distributed.scheduler - INFO - Receive client connection: Client-worker-fdc5e2f8-2582-11ed-aa46-00d861bc4509
-2022-08-26 14:06:45,200 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:06:45,754 - distributed.scheduler - INFO - Remove client Client-fdc36b54-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:45,754 - distributed.scheduler - INFO - Remove client Client-fdc36b54-2582-11ed-a99d-00d861bc4509
-
-distributed/tests/test_locks.py::test_lock_types 2022-08-26 14:06:45,768 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:45,770 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:45,770 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35621
-2022-08-26 14:06:45,770 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44255
-2022-08-26 14:06:45,770 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-8v56g02w', purging
-2022-08-26 14:06:45,771 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-47xhssxi', purging
-2022-08-26 14:06:45,775 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32861
-2022-08-26 14:06:45,775 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32861
-2022-08-26 14:06:45,775 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:45,775 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36897
-2022-08-26 14:06:45,775 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35621
-2022-08-26 14:06:45,775 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:45,775 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:45,775 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:45,775 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qs_4s_au
-2022-08-26 14:06:45,775 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:45,776 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32927
-2022-08-26 14:06:45,776 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32927
-2022-08-26 14:06:45,776 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:45,776 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36203
-2022-08-26 14:06:45,776 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35621
-2022-08-26 14:06:45,776 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:45,776 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:45,776 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:45,776 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8m2ygfq_
-2022-08-26 14:06:45,776 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:45,779 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32861', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:45,779 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32861
-2022-08-26 14:06:45,779 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:45,779 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32927', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:45,780 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32927
-2022-08-26 14:06:45,780 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:45,780 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35621
-2022-08-26 14:06:45,780 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:45,780 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35621
-2022-08-26 14:06:45,780 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:45,781 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:45,781 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:45,794 - distributed.scheduler - INFO - Receive client connection: Client-fe2138f8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:45,794 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:45,806 - distributed.scheduler - INFO - Remove client Client-fe2138f8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:45,806 - distributed.scheduler - INFO - Remove client Client-fe2138f8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:45,806 - distributed.scheduler - INFO - Close client connection: Client-fe2138f8-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:45,806 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32861
-2022-08-26 14:06:45,807 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32927
-2022-08-26 14:06:45,807 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32861', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:45,807 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32861
-2022-08-26 14:06:45,808 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32927', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:45,808 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32927
-2022-08-26 14:06:45,808 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:45,808 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0a061d1c-7a0d-4a10-b6d9-87bd0881dfe8 Address tcp://127.0.0.1:32861 Status: Status.closing
-2022-08-26 14:06:45,808 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c6320d0d-e209-4c0d-b5e1-d992c03933bf Address tcp://127.0.0.1:32927 Status: Status.closing
-2022-08-26 14:06:45,809 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:45,809 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:46,011 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_locks.py::test_serializable 2022-08-26 14:06:46,017 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:46,019 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:46,019 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45855
-2022-08-26 14:06:46,019 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37895
-2022-08-26 14:06:46,023 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39877
-2022-08-26 14:06:46,024 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39877
-2022-08-26 14:06:46,024 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:46,024 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35931
-2022-08-26 14:06:46,024 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45855
-2022-08-26 14:06:46,024 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,024 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:46,024 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:46,024 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-96afujyi
-2022-08-26 14:06:46,024 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,024 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46621
-2022-08-26 14:06:46,024 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46621
-2022-08-26 14:06:46,024 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:46,025 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41247
-2022-08-26 14:06:46,025 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45855
-2022-08-26 14:06:46,025 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,025 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:46,025 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:46,025 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-38a4gq3o
-2022-08-26 14:06:46,025 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,028 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39877', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:46,028 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39877
-2022-08-26 14:06:46,028 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:46,028 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46621', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:46,029 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46621
-2022-08-26 14:06:46,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:46,029 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45855
-2022-08-26 14:06:46,029 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,029 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45855
-2022-08-26 14:06:46,029 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,030 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:46,030 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:46,044 - distributed.scheduler - INFO - Receive client connection: Client-fe473cbf-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:46,044 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:46,102 - distributed.scheduler - INFO - Remove client Client-fe473cbf-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:46,103 - distributed.scheduler - INFO - Remove client Client-fe473cbf-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:46,103 - distributed.scheduler - INFO - Close client connection: Client-fe473cbf-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:46,104 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39877
-2022-08-26 14:06:46,104 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46621
-2022-08-26 14:06:46,105 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39877', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:46,105 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39877
-2022-08-26 14:06:46,105 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46621', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:46,105 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46621
-2022-08-26 14:06:46,105 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:46,105 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b9074422-5a5a-4f76-b0b2-611e9c55312e Address tcp://127.0.0.1:39877 Status: Status.closing
-2022-08-26 14:06:46,106 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d3e3553c-4b4e-4641-afe3-87c3bf4ab9d3 Address tcp://127.0.0.1:46621 Status: Status.closing
-2022-08-26 14:06:46,107 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:46,107 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:46,308 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_locks.py::test_locks 2022-08-26 14:06:46,314 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:46,316 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:46,316 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40601
-2022-08-26 14:06:46,316 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39813
-2022-08-26 14:06:46,319 - distributed.scheduler - INFO - Receive client connection: Client-fe714f3e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:46,320 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:46,332 - distributed.scheduler - INFO - Remove client Client-fe714f3e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:46,332 - distributed.scheduler - INFO - Remove client Client-fe714f3e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:46,332 - distributed.scheduler - INFO - Close client connection: Client-fe714f3e-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:46,333 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:46,333 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:46,533 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_metrics.py::test_wall_clock[time] PASSED
-distributed/tests/test_metrics.py::test_wall_clock[monotonic] PASSED
-distributed/tests/test_multi_locks.py::test_single_lock 2022-08-26 14:06:46,601 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:46,603 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:46,603 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36957
-2022-08-26 14:06:46,603 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41265
-2022-08-26 14:06:46,607 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44539
-2022-08-26 14:06:46,607 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44539
-2022-08-26 14:06:46,607 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:46,607 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34445
-2022-08-26 14:06:46,607 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36957
-2022-08-26 14:06:46,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,608 - distributed.worker - INFO -               Threads:                          8
-2022-08-26 14:06:46,608 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:46,608 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z_6fnnxz
-2022-08-26 14:06:46,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,608 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39087
-2022-08-26 14:06:46,608 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39087
-2022-08-26 14:06:46,608 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:46,608 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38853
-2022-08-26 14:06:46,608 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36957
-2022-08-26 14:06:46,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,608 - distributed.worker - INFO -               Threads:                          8
-2022-08-26 14:06:46,608 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:46,609 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s8j3bnut
-2022-08-26 14:06:46,609 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,611 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44539', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:46,612 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44539
-2022-08-26 14:06:46,612 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:46,612 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39087', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:46,612 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39087
-2022-08-26 14:06:46,612 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:46,613 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36957
-2022-08-26 14:06:46,613 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,613 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36957
-2022-08-26 14:06:46,613 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:46,613 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:46,613 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:46,627 - distributed.scheduler - INFO - Receive client connection: Client-fea04789-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:46,627 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:47,767 - distributed.scheduler - INFO - Remove client Client-fea04789-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:47,767 - distributed.scheduler - INFO - Remove client Client-fea04789-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:47,767 - distributed.scheduler - INFO - Close client connection: Client-fea04789-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:47,769 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44539
-2022-08-26 14:06:47,769 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39087
-2022-08-26 14:06:47,770 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44539', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:47,770 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44539
-2022-08-26 14:06:47,771 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39087', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:47,771 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39087
-2022-08-26 14:06:47,771 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:47,771 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7b71a7a2-2366-443d-9060-f2d1434360ff Address tcp://127.0.0.1:44539 Status: Status.closing
-2022-08-26 14:06:47,771 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dbace04e-c423-49ab-ade1-c676f99f853f Address tcp://127.0.0.1:39087 Status: Status.closing
-2022-08-26 14:06:47,773 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:47,774 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:47,973 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_multi_locks.py::test_timeout 2022-08-26 14:06:47,979 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:47,980 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:47,980 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44069
-2022-08-26 14:06:47,981 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45985
-2022-08-26 14:06:47,985 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46033
-2022-08-26 14:06:47,985 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46033
-2022-08-26 14:06:47,985 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:47,985 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39085
-2022-08-26 14:06:47,985 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44069
-2022-08-26 14:06:47,985 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:47,985 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:47,985 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:47,985 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j_irf5wy
-2022-08-26 14:06:47,985 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:47,986 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33435
-2022-08-26 14:06:47,986 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33435
-2022-08-26 14:06:47,986 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:47,986 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37879
-2022-08-26 14:06:47,986 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44069
-2022-08-26 14:06:47,986 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:47,986 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:47,986 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:47,986 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-er96zurq
-2022-08-26 14:06:47,986 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:47,989 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46033', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:47,989 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46033
-2022-08-26 14:06:47,989 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:47,990 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33435', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:47,990 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33435
-2022-08-26 14:06:47,990 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:47,990 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44069
-2022-08-26 14:06:47,990 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:47,991 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44069
-2022-08-26 14:06:47,991 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:47,991 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:47,991 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:48,005 - distributed.scheduler - INFO - Receive client connection: Client-ff727fe7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:48,005 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:48,108 - distributed.scheduler - INFO - Remove client Client-ff727fe7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:48,108 - distributed.scheduler - INFO - Remove client Client-ff727fe7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:48,108 - distributed.scheduler - INFO - Close client connection: Client-ff727fe7-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:48,109 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46033
-2022-08-26 14:06:48,109 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33435
-2022-08-26 14:06:48,110 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46033', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:48,110 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46033
-2022-08-26 14:06:48,110 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33435', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:48,110 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33435
-2022-08-26 14:06:48,110 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:48,110 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5ebbf95f-d4cd-4e77-8e15-a0c2ef15267b Address tcp://127.0.0.1:46033 Status: Status.closing
-2022-08-26 14:06:48,111 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e233b750-9fd7-49bd-b45f-efd8c46282a1 Address tcp://127.0.0.1:33435 Status: Status.closing
-2022-08-26 14:06:48,111 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:48,112 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:48,311 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_multi_locks.py::test_timeout_wake_waiter 2022-08-26 14:06:48,317 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:48,318 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:48,319 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45223
-2022-08-26 14:06:48,319 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33855
-2022-08-26 14:06:48,323 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39763
-2022-08-26 14:06:48,323 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39763
-2022-08-26 14:06:48,323 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:48,323 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42313
-2022-08-26 14:06:48,323 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45223
-2022-08-26 14:06:48,323 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:48,323 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:48,323 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:48,323 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rhewnkam
-2022-08-26 14:06:48,323 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:48,324 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41477
-2022-08-26 14:06:48,324 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41477
-2022-08-26 14:06:48,324 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:48,324 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41795
-2022-08-26 14:06:48,324 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45223
-2022-08-26 14:06:48,324 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:48,324 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:48,324 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:48,324 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yf82btqq
-2022-08-26 14:06:48,324 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:48,327 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39763', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:48,327 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39763
-2022-08-26 14:06:48,327 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:48,328 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41477', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:48,328 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41477
-2022-08-26 14:06:48,328 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:48,328 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45223
-2022-08-26 14:06:48,328 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:48,329 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45223
-2022-08-26 14:06:48,329 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:48,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:48,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:48,344 - distributed.scheduler - INFO - Receive client connection: Client-worker-ffa613f5-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:48,344 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:48,846 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39763
-2022-08-26 14:06:48,846 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41477
-2022-08-26 14:06:48,847 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39763', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:48,848 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39763
-2022-08-26 14:06:48,848 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-12625c6d-25a8-4556-912e-bdc9eb7448c3 Address tcp://127.0.0.1:39763 Status: Status.closing
-2022-08-26 14:06:48,848 - distributed.scheduler - INFO - Remove client Client-worker-ffa613f5-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:48,848 - distributed.scheduler - INFO - Remove client Client-worker-ffa613f5-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:48,849 - distributed.scheduler - INFO - Close client connection: Client-worker-ffa613f5-2582-11ed-a99d-00d861bc4509
-2022-08-26 14:06:48,850 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41477', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:48,850 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41477
-2022-08-26 14:06:48,850 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:48,850 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6f122f68-dd31-4bd2-b0c4-9270f3355689 Address tcp://127.0.0.1:41477 Status: Status.closing
-2022-08-26 14:06:48,850 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:48,851 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:49,049 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_multi_locks.py::test_multiple_locks 2022-08-26 14:06:49,055 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:49,057 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:49,057 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38037
-2022-08-26 14:06:49,057 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38101
-2022-08-26 14:06:49,061 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42475
-2022-08-26 14:06:49,061 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42475
-2022-08-26 14:06:49,061 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:49,061 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46041
-2022-08-26 14:06:49,062 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38037
-2022-08-26 14:06:49,062 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,062 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:49,062 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:49,062 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jle2_yfu
-2022-08-26 14:06:49,062 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,062 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32983
-2022-08-26 14:06:49,062 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32983
-2022-08-26 14:06:49,062 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:49,062 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33253
-2022-08-26 14:06:49,062 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38037
-2022-08-26 14:06:49,062 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,063 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:49,063 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:49,063 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vdb92dgf
-2022-08-26 14:06:49,063 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,066 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42475', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:49,066 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42475
-2022-08-26 14:06:49,066 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:49,066 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32983', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:49,067 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32983
-2022-08-26 14:06:49,067 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:49,067 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38037
-2022-08-26 14:06:49,067 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,067 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38037
-2022-08-26 14:06:49,067 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,067 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:49,067 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:49,081 - distributed.scheduler - INFO - Receive client connection: Client-0016c006-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:49,081 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:49,287 - distributed.scheduler - INFO - Remove client Client-0016c006-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:49,287 - distributed.scheduler - INFO - Remove client Client-0016c006-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:49,287 - distributed.scheduler - INFO - Close client connection: Client-0016c006-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:49,288 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42475
-2022-08-26 14:06:49,288 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32983
-2022-08-26 14:06:49,289 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42475', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:49,289 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42475
-2022-08-26 14:06:49,289 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32983', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:49,289 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32983
-2022-08-26 14:06:49,289 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:49,289 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-307b02e4-7dd6-480a-95b1-65f726083dee Address tcp://127.0.0.1:42475 Status: Status.closing
-2022-08-26 14:06:49,290 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6234fd51-32f6-4f6b-954a-4ace1f144746 Address tcp://127.0.0.1:32983 Status: Status.closing
-2022-08-26 14:06:49,290 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:49,291 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:49,490 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_multi_locks.py::test_num_locks 2022-08-26 14:06:49,496 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:49,497 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:49,497 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46689
-2022-08-26 14:06:49,497 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41177
-2022-08-26 14:06:49,502 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44143
-2022-08-26 14:06:49,502 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44143
-2022-08-26 14:06:49,502 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:49,502 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40489
-2022-08-26 14:06:49,502 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46689
-2022-08-26 14:06:49,502 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,502 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:49,502 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:49,502 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-segnjub2
-2022-08-26 14:06:49,502 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,503 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40559
-2022-08-26 14:06:49,503 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40559
-2022-08-26 14:06:49,503 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:49,503 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40385
-2022-08-26 14:06:49,503 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46689
-2022-08-26 14:06:49,503 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,503 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:49,503 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:49,503 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4pv741c4
-2022-08-26 14:06:49,503 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,506 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44143', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:49,506 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44143
-2022-08-26 14:06:49,506 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:49,506 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40559', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:49,507 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40559
-2022-08-26 14:06:49,507 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:49,507 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46689
-2022-08-26 14:06:49,507 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,507 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46689
-2022-08-26 14:06:49,507 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:49,508 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:49,508 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:49,521 - distributed.scheduler - INFO - Receive client connection: Client-0059ec15-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:49,522 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:49,828 - distributed.scheduler - INFO - Remove client Client-0059ec15-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:49,828 - distributed.scheduler - INFO - Remove client Client-0059ec15-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:49,829 - distributed.scheduler - INFO - Close client connection: Client-0059ec15-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:49,829 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44143
-2022-08-26 14:06:49,830 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40559
-2022-08-26 14:06:49,831 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44143', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:49,831 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44143
-2022-08-26 14:06:49,831 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40559', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:49,831 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40559
-2022-08-26 14:06:49,831 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:49,831 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4836a4c6-a8a1-4123-b5c7-d4afb1bd23f5 Address tcp://127.0.0.1:44143 Status: Status.closing
-2022-08-26 14:06:49,832 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e5a1fa50-605f-4791-9477-620f8282d352 Address tcp://127.0.0.1:40559 Status: Status.closing
-2022-08-26 14:06:49,832 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:49,833 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:50,032 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_str 2022-08-26 14:06:50,038 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:50,039 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:50,040 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:32955
-2022-08-26 14:06:50,040 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38275
-2022-08-26 14:06:50,045 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33467'
-2022-08-26 14:06:50,045 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:41981'
-2022-08-26 14:06:50,665 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34251
-2022-08-26 14:06:50,665 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34251
-2022-08-26 14:06:50,665 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:50,665 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35873
-2022-08-26 14:06:50,665 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32955
-2022-08-26 14:06:50,665 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:50,665 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:50,665 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:50,665 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6a1ezqmh
-2022-08-26 14:06:50,665 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:50,676 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42715
-2022-08-26 14:06:50,676 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42715
-2022-08-26 14:06:50,676 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:50,676 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38811
-2022-08-26 14:06:50,676 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32955
-2022-08-26 14:06:50,676 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:50,677 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:50,677 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:50,677 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0b3oy8i5
-2022-08-26 14:06:50,677 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:50,930 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42715', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:50,930 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42715
-2022-08-26 14:06:50,930 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:50,930 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32955
-2022-08-26 14:06:50,930 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:50,931 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:50,933 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34251', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:50,934 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34251
-2022-08-26 14:06:50,934 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:50,934 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32955
-2022-08-26 14:06:50,934 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:50,934 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:50,986 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:33467'.
-2022-08-26 14:06:50,986 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:50,987 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41981'.
-2022-08-26 14:06:50,987 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:50,987 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34251
-2022-08-26 14:06:50,987 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42715
-2022-08-26 14:06:50,988 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f46bc197-bb94-4e35-bf89-a2eb1219a659 Address tcp://127.0.0.1:34251 Status: Status.closing
-2022-08-26 14:06:50,988 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34251', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:50,988 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34251
-2022-08-26 14:06:50,988 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-65a062b9-d9bd-4474-9580-a92b67807adf Address tcp://127.0.0.1:42715 Status: Status.closing
-2022-08-26 14:06:50,988 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42715', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:50,988 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42715
-2022-08-26 14:06:50,988 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:51,121 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:51,121 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:51,320 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_nanny_process_failure 2022-08-26 14:06:51,326 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:51,328 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:51,328 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46095
-2022-08-26 14:06:51,328 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34305
-2022-08-26 14:06:51,331 - distributed.scheduler - INFO - Receive client connection: Client-016e0e22-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:51,331 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:51,335 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:35951'
-2022-08-26 14:06:51,952 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41209
-2022-08-26 14:06:51,952 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41209
-2022-08-26 14:06:51,952 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43861
-2022-08-26 14:06:51,952 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46095
-2022-08-26 14:06:51,952 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:51,952 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:51,952 - distributed.worker - INFO -                Memory:                  10.47 GiB
-2022-08-26 14:06:51,952 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-aaf9nn88
-2022-08-26 14:06:51,952 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:52,215 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41209', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:52,215 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41209
-2022-08-26 14:06:52,216 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:52,216 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46095
-2022-08-26 14:06:52,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:52,216 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:52,228 - distributed.worker - INFO - Run out-of-band function '_exit'
-2022-08-26 14:06:52,232 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:41209 failed: CommClosedError: in <TCP (closed) Scheduler Broadcast local=tcp://127.0.0.1:58084 remote=tcp://127.0.0.1:41209>: Stream is closed
-2022-08-26 14:06:52,233 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41209', status: running, memory: 0, processing: 0>
-2022-08-26 14:06:52,233 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41209
-2022-08-26 14:06:52,233 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:52,234 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:06:52,860 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36127
-2022-08-26 14:06:52,860 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36127
-2022-08-26 14:06:52,860 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44899
-2022-08-26 14:06:52,860 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46095
-2022-08-26 14:06:52,860 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:52,860 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:52,860 - distributed.worker - INFO -                Memory:                  10.47 GiB
-2022-08-26 14:06:52,860 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rb5xhrfu
-2022-08-26 14:06:52,860 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:53,105 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36127', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:53,106 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36127
-2022-08-26 14:06:53,106 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:53,106 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46095
-2022-08-26 14:06:53,106 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:53,107 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:53,234 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:35951'.
-2022-08-26 14:06:53,234 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:53,235 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36127
-2022-08-26 14:06:53,235 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-13c27ace-8ce2-405e-9a06-dcfa0737e43a Address tcp://127.0.0.1:36127 Status: Status.closing
-2022-08-26 14:06:53,236 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36127', status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:53,236 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36127
-2022-08-26 14:06:53,236 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:53,365 - distributed.scheduler - INFO - Remove client Client-016e0e22-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:53,365 - distributed.scheduler - INFO - Remove client Client-016e0e22-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:53,365 - distributed.scheduler - INFO - Close client connection: Client-016e0e22-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:53,365 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:53,366 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:53,565 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_run 2022-08-26 14:06:53,571 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:53,573 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:53,573 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39003
-2022-08-26 14:06:53,573 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37227
-2022-08-26 14:06:53,576 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:45729'
-2022-08-26 14:06:54,191 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36225
-2022-08-26 14:06:54,191 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36225
-2022-08-26 14:06:54,191 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43351
-2022-08-26 14:06:54,191 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39003
-2022-08-26 14:06:54,191 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:54,191 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:54,191 - distributed.worker - INFO -                Memory:                  10.47 GiB
-2022-08-26 14:06:54,191 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-83rp0n70
-2022-08-26 14:06:54,191 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:54,457 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36225', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:54,457 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36225
-2022-08-26 14:06:54,457 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:54,458 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39003
-2022-08-26 14:06:54,458 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:54,458 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:54,464 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:06:54,464 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:45729'.
-2022-08-26 14:06:54,464 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:54,465 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36225
-2022-08-26 14:06:54,465 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-039ba711-c726-4cdc-82be-26c0ea2b6609 Address tcp://127.0.0.1:36225 Status: Status.closing
-2022-08-26 14:06:54,466 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36225', status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:54,466 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36225
-2022-08-26 14:06:54,466 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:54,594 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:54,594 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:54,793 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_no_hang_when_scheduler_closes SKIPPED
-distributed/tests/test_nanny.py::test_close_on_disconnect SKIPPED (n...)
-distributed/tests/test_nanny.py::test_nanny_worker_class 2022-08-26 14:06:54,800 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:54,802 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:54,802 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45401
-2022-08-26 14:06:54,802 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35139
-2022-08-26 14:06:54,807 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:43849'
-2022-08-26 14:06:54,808 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:41173'
-2022-08-26 14:06:55,422 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35155
-2022-08-26 14:06:55,422 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35155
-2022-08-26 14:06:55,422 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:55,423 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38903
-2022-08-26 14:06:55,423 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45401
-2022-08-26 14:06:55,423 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:55,423 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:55,423 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:55,423 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dcnv7tbm
-2022-08-26 14:06:55,423 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:55,425 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43105
-2022-08-26 14:06:55,425 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43105
-2022-08-26 14:06:55,425 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:55,425 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41157
-2022-08-26 14:06:55,425 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45401
-2022-08-26 14:06:55,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:55,425 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:55,425 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:55,425 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1csf_te_
-2022-08-26 14:06:55,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:55,672 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43105', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:55,672 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43105
-2022-08-26 14:06:55,672 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:55,672 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45401
-2022-08-26 14:06:55,673 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:55,673 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:55,688 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35155', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:55,688 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35155
-2022-08-26 14:06:55,688 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:55,689 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45401
-2022-08-26 14:06:55,689 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:55,689 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:55,698 - distributed.scheduler - INFO - Receive client connection: Client-0408653f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:55,698 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:55,701 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:06:55,701 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:06:55,709 - distributed.scheduler - INFO - Remove client Client-0408653f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:55,710 - distributed.scheduler - INFO - Remove client Client-0408653f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:55,710 - distributed.scheduler - INFO - Close client connection: Client-0408653f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:55,710 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:43849'.
-2022-08-26 14:06:55,710 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:55,711 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41173'.
-2022-08-26 14:06:55,711 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:55,711 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35155
-2022-08-26 14:06:55,711 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43105
-2022-08-26 14:06:55,712 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2d0554f9-8e67-4543-80c3-b49dc637a378 Address tcp://127.0.0.1:35155 Status: Status.closing
-2022-08-26 14:06:55,712 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35155', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:55,712 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35155
-2022-08-26 14:06:55,712 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2afbd03a-929d-4f46-9060-e37654deef68 Address tcp://127.0.0.1:43105 Status: Status.closing
-2022-08-26 14:06:55,712 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43105', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:55,713 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43105
-2022-08-26 14:06:55,713 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:55,845 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:55,845 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:56,046 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_nanny_alt_worker_class 2022-08-26 14:06:56,051 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:56,053 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:56,053 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34293
-2022-08-26 14:06:56,053 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34173
-2022-08-26 14:06:56,058 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:37341'
-2022-08-26 14:06:56,058 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:35505'
-2022-08-26 14:06:56,907 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37751
-2022-08-26 14:06:56,907 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37751
-2022-08-26 14:06:56,907 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:56,907 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40185
-2022-08-26 14:06:56,907 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34293
-2022-08-26 14:06:56,907 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:56,907 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:56,907 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:56,907 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cbx2878x
-2022-08-26 14:06:56,907 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:56,910 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41091
-2022-08-26 14:06:56,910 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41091
-2022-08-26 14:06:56,910 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:56,910 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35183
-2022-08-26 14:06:56,910 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34293
-2022-08-26 14:06:56,910 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:56,910 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:56,910 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:56,910 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-n73ro14d
-2022-08-26 14:06:56,910 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:57,169 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41091', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:57,169 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41091
-2022-08-26 14:06:57,169 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:57,169 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34293
-2022-08-26 14:06:57,169 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:57,170 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:57,186 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37751', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:57,186 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37751
-2022-08-26 14:06:57,186 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:57,187 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34293
-2022-08-26 14:06:57,187 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:57,187 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:57,203 - distributed.scheduler - INFO - Receive client connection: Client-04ee1020-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:57,204 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:57,206 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:06:57,206 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:06:57,215 - distributed.scheduler - INFO - Remove client Client-04ee1020-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:57,216 - distributed.scheduler - INFO - Remove client Client-04ee1020-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:57,216 - distributed.scheduler - INFO - Close client connection: Client-04ee1020-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:57,216 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:37341'.
-2022-08-26 14:06:57,216 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:57,217 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:35505'.
-2022-08-26 14:06:57,217 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:57,217 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41091
-2022-08-26 14:06:57,218 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Something-87b80e7a-d650-435a-bacb-8464e8a0ad4b Address tcp://127.0.0.1:41091 Status: Status.closing
-2022-08-26 14:06:57,218 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37751
-2022-08-26 14:06:57,218 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41091', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:57,218 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41091
-2022-08-26 14:06:57,219 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Something-81587f19-5e1f-4da8-b6f2-17c9afd37fbd Address tcp://127.0.0.1:37751 Status: Status.closing
-2022-08-26 14:06:57,219 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37751', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:57,219 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37751
-2022-08-26 14:06:57,219 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:57,392 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:57,392 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:57,592 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_nanny_death_timeout SKIPPED (n...)
-distributed/tests/test_nanny.py::test_random_seed 2022-08-26 14:06:57,599 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:57,600 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:57,600 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35965
-2022-08-26 14:06:57,600 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46371
-2022-08-26 14:06:57,606 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:34605'
-2022-08-26 14:06:57,606 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:45297'
-2022-08-26 14:06:58,225 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35061
-2022-08-26 14:06:58,225 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35061
-2022-08-26 14:06:58,226 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:06:58,226 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38103
-2022-08-26 14:06:58,226 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35965
-2022-08-26 14:06:58,226 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:58,226 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:06:58,226 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:58,226 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-l_b3lbhy
-2022-08-26 14:06:58,226 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:58,227 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41311
-2022-08-26 14:06:58,227 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41311
-2022-08-26 14:06:58,227 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:06:58,227 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36063
-2022-08-26 14:06:58,227 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35965
-2022-08-26 14:06:58,227 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:58,227 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:06:58,227 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:58,227 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0ew9pw6o
-2022-08-26 14:06:58,227 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:58,475 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41311', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:58,476 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41311
-2022-08-26 14:06:58,476 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:58,476 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35965
-2022-08-26 14:06:58,476 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:58,477 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:58,493 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35061', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:06:58,493 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35061
-2022-08-26 14:06:58,493 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:58,493 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35965
-2022-08-26 14:06:58,494 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:58,494 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:58,498 - distributed.scheduler - INFO - Receive client connection: Client-05b39fbe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:58,498 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:58,531 - distributed.scheduler - INFO - Remove client Client-05b39fbe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:58,531 - distributed.scheduler - INFO - Remove client Client-05b39fbe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:58,532 - distributed.scheduler - INFO - Close client connection: Client-05b39fbe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:06:58,532 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:34605'.
-2022-08-26 14:06:58,532 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:58,533 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:45297'.
-2022-08-26 14:06:58,533 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:58,533 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41311
-2022-08-26 14:06:58,533 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35061
-2022-08-26 14:06:58,534 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4c3244be-af89-4b5a-8bd0-7be3aad6ca8e Address tcp://127.0.0.1:41311 Status: Status.closing
-2022-08-26 14:06:58,534 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41311', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:58,534 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41311
-2022-08-26 14:06:58,534 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-98bb002c-331f-4e9f-985b-48e5285385e6 Address tcp://127.0.0.1:35061 Status: Status.closing
-2022-08-26 14:06:58,534 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35061', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:58,534 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35061
-2022-08-26 14:06:58,535 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:06:58,680 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:06:58,680 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:06:58,880 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_num_fds 2022-08-26 14:06:58,885 - distributed.scheduler - INFO - State start
-2022-08-26 14:06:58,887 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:06:58,887 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36269
-2022-08-26 14:06:58,887 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35001
-2022-08-26 14:06:58,890 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33295'
-2022-08-26 14:06:59,505 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34701
-2022-08-26 14:06:59,505 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34701
-2022-08-26 14:06:59,505 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37467
-2022-08-26 14:06:59,505 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36269
-2022-08-26 14:06:59,505 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:59,505 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:06:59,505 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:06:59,505 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_s_fo1oi
-2022-08-26 14:06:59,505 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:59,772 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34701', status: init, memory: 0, processing: 0>
-2022-08-26 14:06:59,773 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34701
-2022-08-26 14:06:59,773 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:59,773 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36269
-2022-08-26 14:06:59,773 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:06:59,774 - distributed.core - INFO - Starting established connection
-2022-08-26 14:06:59,775 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:33295'.
-2022-08-26 14:06:59,775 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:06:59,776 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34701
-2022-08-26 14:06:59,777 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4a9e6fd9-974e-4b36-b42c-e6e666767975 Address tcp://127.0.0.1:34701 Status: Status.closing
-2022-08-26 14:06:59,777 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34701', status: closing, memory: 0, processing: 0>
-2022-08-26 14:06:59,777 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34701
-2022-08-26 14:06:59,777 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:00,105 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-2022-08-26 14:07:00,109 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:39297'
-2022-08-26 14:07:00,715 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39965
-2022-08-26 14:07:00,715 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39965
-2022-08-26 14:07:00,715 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39493
-2022-08-26 14:07:00,715 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36269
-2022-08-26 14:07:00,715 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:00,715 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:00,715 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:00,715 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_ekwcaul
-2022-08-26 14:07:00,715 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:00,975 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39965', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:00,976 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39965
-2022-08-26 14:07:00,976 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:00,976 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36269
-2022-08-26 14:07:00,976 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:00,977 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:01,095 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:39297'.
-2022-08-26 14:07:01,095 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:01,096 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39965
-2022-08-26 14:07:01,096 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4ae0e853-2955-49bf-a35c-ffe3cc84c2cd Address tcp://127.0.0.1:39965 Status: Status.closing
-2022-08-26 14:07:01,097 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39965', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:01,097 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39965
-2022-08-26 14:07:01,097 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:01,228 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:40603'
-2022-08-26 14:07:01,844 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46455
-2022-08-26 14:07:01,844 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46455
-2022-08-26 14:07:01,844 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34639
-2022-08-26 14:07:01,844 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36269
-2022-08-26 14:07:01,844 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:01,844 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:01,844 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:01,844 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lmq9yxik
-2022-08-26 14:07:01,844 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:02,107 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46455', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:02,108 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46455
-2022-08-26 14:07:02,108 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:02,108 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36269
-2022-08-26 14:07:02,108 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:02,109 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:02,216 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:40603'.
-2022-08-26 14:07:02,216 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:02,216 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46455
-2022-08-26 14:07:02,217 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1487d946-3e69-4012-9f5e-a3040e336dcc Address tcp://127.0.0.1:46455 Status: Status.closing
-2022-08-26 14:07:02,217 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46455', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:02,217 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46455
-2022-08-26 14:07:02,217 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:02,349 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:34551'
-2022-08-26 14:07:02,963 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37845
-2022-08-26 14:07:02,963 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37845
-2022-08-26 14:07:02,963 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45289
-2022-08-26 14:07:02,963 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36269
-2022-08-26 14:07:02,963 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:02,963 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:02,963 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:02,963 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-56naclct
-2022-08-26 14:07:02,963 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:03,228 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37845', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:03,229 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37845
-2022-08-26 14:07:03,229 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:03,229 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36269
-2022-08-26 14:07:03,229 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:03,230 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:03,336 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:34551'.
-2022-08-26 14:07:03,336 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:03,336 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37845
-2022-08-26 14:07:03,337 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c0f749ae-fddc-4438-97af-84f51bbb4625 Address tcp://127.0.0.1:37845 Status: Status.closing
-2022-08-26 14:07:03,337 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37845', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:03,337 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37845
-2022-08-26 14:07:03,337 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:03,464 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:03,465 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:03,662 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
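
The block above repeatedly starts a Nanny against a running scheduler and then shuts it down again. As a rough illustration of that lifecycle (not the test code itself), the following asyncio sketch assumes only the `Scheduler` and `Nanny` classes that appear throughout this log; the function name and cycle count are invented.

    import asyncio
    from distributed import Nanny, Scheduler

    async def cycle_nanny(cycles: int = 3) -> None:
        # Keep one scheduler alive and repeatedly start/stop a nanny-managed
        # worker against it, mirroring the start/close pattern in the log.
        async with Scheduler(port=0, dashboard_address=":0") as scheduler:
            for _ in range(cycles):
                async with Nanny(scheduler.address, nthreads=1) as nanny:
                    # By the time the body runs, the nanny has spawned a
                    # worker process and registered it with the scheduler.
                    print("worker listening at", nanny.worker_address)
                # Leaving the block closes the nanny and its worker.

    if __name__ == "__main__":
        asyncio.run(cycle_nanny())
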
-distributed/tests/test_nanny.py::test_worker_uses_same_host_as_nanny 2022-08-26 14:07:03,668 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:03,670 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:03,670 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36277
-2022-08-26 14:07:03,670 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35425
-2022-08-26 14:07:03,673 - distributed.scheduler - INFO - Receive client connection: Client-08c94616-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:03,673 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:03,677 - distributed.nanny - INFO -         Start Nanny at: 'tcp://192.168.1.159:36413'
-2022-08-26 14:07:04,285 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37623
-2022-08-26 14:07:04,285 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37623
-2022-08-26 14:07:04,285 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45325
-2022-08-26 14:07:04,285 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36277
-2022-08-26 14:07:04,285 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:04,285 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:04,285 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:04,285 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d2ouagz1
-2022-08-26 14:07:04,285 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:04,552 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37623', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:04,552 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37623
-2022-08-26 14:07:04,553 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:04,553 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36277
-2022-08-26 14:07:04,553 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:04,554 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:04,564 - distributed.worker - INFO - Run out-of-band function 'func'
-2022-08-26 14:07:04,564 - distributed.nanny - INFO - Closing Nanny at 'tcp://192.168.1.159:36413'.
-2022-08-26 14:07:04,564 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:04,565 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37623
-2022-08-26 14:07:04,565 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8de3e3cb-0a24-4072-a379-c947f9b37314 Address tcp://127.0.0.1:37623 Status: Status.closing
-2022-08-26 14:07:04,565 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37623', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:04,565 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37623
-2022-08-26 14:07:04,566 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:04,696 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.2:44781'
-2022-08-26 14:07:05,315 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:44057
-2022-08-26 14:07:05,315 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:44057
-2022-08-26 14:07:05,315 - distributed.worker - INFO -          dashboard at:            127.0.0.2:33729
-2022-08-26 14:07:05,315 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36277
-2022-08-26 14:07:05,315 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:05,315 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:05,315 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:05,315 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sx79du3k
-2022-08-26 14:07:05,316 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:05,580 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:44057', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:05,581 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:44057
-2022-08-26 14:07:05,581 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:05,581 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36277
-2022-08-26 14:07:05,581 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:05,582 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:05,633 - distributed.worker - INFO - Run out-of-band function 'func'
-2022-08-26 14:07:05,633 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.2:44781'.
-2022-08-26 14:07:05,633 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:05,634 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:44057
-2022-08-26 14:07:05,634 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d87692d5-7855-4e17-a2e9-29ad221c0347 Address tcp://127.0.0.2:44057 Status: Status.closing
-2022-08-26 14:07:05,635 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:44057', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:05,635 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:44057
-2022-08-26 14:07:05,635 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:05,763 - distributed.scheduler - INFO - Remove client Client-08c94616-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:05,763 - distributed.scheduler - INFO - Remove client Client-08c94616-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:05,763 - distributed.scheduler - INFO - Close client connection: Client-08c94616-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:05,763 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:05,764 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:05,962 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
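
test_worker_uses_same_host_as_nanny, whose log ends above, exercises `Client.run`, which the worker reports as "Run out-of-band function". A hedged sketch of that mechanism follows; the helper name is invented, and only the documented `Client.run` call and its `dask_worker` argument convention are relied on.

    from distributed import Client, LocalCluster

    def report_address(dask_worker):
        # Client.run passes the local Worker object when the callable takes
        # an argument literally named `dask_worker`; return where it listens.
        return dask_worker.address

    if __name__ == "__main__":
        with LocalCluster(n_workers=1, processes=True) as cluster, Client(cluster) as client:
            # Runs outside the task graph, once on every worker, and returns
            # a dict keyed by worker address -- the "out-of-band" call above.
            print(client.run(report_address))
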
-distributed/tests/test_nanny.py::test_scheduler_file 2022-08-26 14:07:05,987 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:05,988 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:05,989 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:43655
-2022-08-26 14:07:05,989 - distributed.scheduler - INFO -   dashboard at:                    :38901
-2022-08-26 14:07:05,992 - distributed.nanny - INFO -         Start Nanny at: 'tcp://192.168.1.159:46101'
-2022-08-26 14:07:06,607 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:37487
-2022-08-26 14:07:06,607 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:37487
-2022-08-26 14:07:06,607 - distributed.worker - INFO -          dashboard at:        192.168.1.159:35559
-2022-08-26 14:07:06,607 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:43655
-2022-08-26 14:07:06,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:06,607 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:06,607 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:06,607 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k57ghepj
-2022-08-26 14:07:06,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:06,872 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://192.168.1.159:37487', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:06,872 - distributed.scheduler - INFO - Starting worker compute stream, tcp://192.168.1.159:37487
-2022-08-26 14:07:06,872 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:06,872 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:43655
-2022-08-26 14:07:06,873 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:06,873 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:06,877 - distributed.nanny - INFO - Closing Nanny at 'tcp://192.168.1.159:46101'.
-2022-08-26 14:07:06,877 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:06,877 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:37487
-2022-08-26 14:07:06,878 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8de84bfc-4186-452b-bce3-32e453cde4a6 Address tcp://192.168.1.159:37487 Status: Status.closing
-2022-08-26 14:07:06,878 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://192.168.1.159:37487', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:06,878 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:37487
-2022-08-26 14:07:06,878 - distributed.scheduler - INFO - Lost all workers
-PASSED
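
test_scheduler_file, logged above, relies on the scheduler writing its contact information to a JSON file that workers and clients read back. A minimal sketch of that documented mechanism, with the file path chosen arbitrarily:

    import asyncio
    from distributed import Client, Nanny, Scheduler

    async def main(path: str = "/tmp/scheduler.json") -> None:
        # The scheduler writes its address into `path`; the nanny and the
        # client locate it through the file instead of an explicit address.
        async with Scheduler(port=0, dashboard_address=":0", scheduler_file=path):
            async with Nanny(scheduler_file=path, nthreads=1):
                async with Client(scheduler_file=path, asynchronous=True) as client:
                    info = await client.scheduler_info()
                    print(sorted(info["workers"]))

    if __name__ == "__main__":
        asyncio.run(main())
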
-distributed/tests/test_nanny.py::test_nanny_timeout 2022-08-26 14:07:07,011 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:07,012 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:07,012 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35173
-2022-08-26 14:07:07,012 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33665
-2022-08-26 14:07:07,015 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:40229'
-2022-08-26 14:07:07,623 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46653
-2022-08-26 14:07:07,623 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46653
-2022-08-26 14:07:07,623 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:07,623 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41767
-2022-08-26 14:07:07,623 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35173
-2022-08-26 14:07:07,623 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:07,623 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:07:07,623 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:07,624 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e0ajr9dg
-2022-08-26 14:07:07,624 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:07,871 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46653', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:07,871 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46653
-2022-08-26 14:07:07,871 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:07,871 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35173
-2022-08-26 14:07:07,871 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:07,872 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:07,902 - distributed.scheduler - INFO - Receive client connection: Client-0b4ea8d2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:07,903 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:07,906 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46653
-2022-08-26 14:07:07,907 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b0cfdb61-87e5-4b24-8f72-e0527f5481b6 Address tcp://127.0.0.1:46653 Status: Status.closing
-2022-08-26 14:07:07,907 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46653', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:07:07,907 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46653
-2022-08-26 14:07:07,907 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:08,007 - distributed.nanny - ERROR - Restart timed out after 0.1s; returning before finished
-2022-08-26 14:07:08,018 - distributed.scheduler - INFO - Remove client Client-0b4ea8d2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:08,018 - distributed.scheduler - INFO - Remove client Client-0b4ea8d2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:08,018 - distributed.scheduler - INFO - Close client connection: Client-0b4ea8d2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:08,018 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:40229'.
-2022-08-26 14:07:08,035 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:08,035 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:08,236 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
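
The "Restart timed out after 0.1s" error in test_nanny_timeout above comes from asking a nanny to restart its worker under a deadline too short to meet. A sketch under the assumption, suggested by that log line, that `Nanny.restart` accepts a `timeout` argument:

    import asyncio
    from distributed import Nanny, Scheduler

    async def main() -> None:
        async with Scheduler(port=0, dashboard_address=":0") as scheduler:
            async with Nanny(scheduler.address, nthreads=2) as nanny:
                # Kill and relaunch the worker process, but give up after a
                # deliberately tiny deadline; the nanny then logs the
                # "Restart timed out ... returning before finished" error.
                await nanny.restart(timeout=0.1)

    if __name__ == "__main__":
        asyncio.run(main())
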
-distributed/tests/test_nanny.py::test_throttle_outgoing_connections 2022-08-26 14:07:08,242 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:08,244 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:08,244 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36849
-2022-08-26 14:07:08,244 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35595
-2022-08-26 14:07:08,259 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35049
-2022-08-26 14:07:08,259 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35049
-2022-08-26 14:07:08,259 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:08,259 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33643
-2022-08-26 14:07:08,259 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,259 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,259 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:08,259 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:08,259 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i3z7v1ty
-2022-08-26 14:07:08,259 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,260 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39821
-2022-08-26 14:07:08,260 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39821
-2022-08-26 14:07:08,260 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:07:08,260 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44757
-2022-08-26 14:07:08,260 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,260 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,260 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:08,260 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:08,260 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-l2q7wtwh
-2022-08-26 14:07:08,260 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,261 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40691
-2022-08-26 14:07:08,261 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40691
-2022-08-26 14:07:08,261 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:07:08,261 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39097
-2022-08-26 14:07:08,261 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,261 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:08,261 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:08,261 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ysemgd5s
-2022-08-26 14:07:08,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,262 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37207
-2022-08-26 14:07:08,262 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37207
-2022-08-26 14:07:08,262 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:07:08,262 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41261
-2022-08-26 14:07:08,262 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,262 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,262 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:08,262 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:08,262 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zhmz93zz
-2022-08-26 14:07:08,262 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,263 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38277
-2022-08-26 14:07:08,263 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38277
-2022-08-26 14:07:08,263 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:07:08,263 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37367
-2022-08-26 14:07:08,263 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,263 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,263 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:08,263 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:08,263 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-54jcpy_r
-2022-08-26 14:07:08,263 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,264 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45411
-2022-08-26 14:07:08,264 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45411
-2022-08-26 14:07:08,264 - distributed.worker - INFO -           Worker name:                          5
-2022-08-26 14:07:08,264 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41585
-2022-08-26 14:07:08,264 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,264 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,264 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:08,264 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:08,264 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ak_7z3yk
-2022-08-26 14:07:08,264 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,265 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40277
-2022-08-26 14:07:08,265 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40277
-2022-08-26 14:07:08,265 - distributed.worker - INFO -           Worker name:                          6
-2022-08-26 14:07:08,265 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35563
-2022-08-26 14:07:08,265 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,265 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,265 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:08,265 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:08,265 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pzx7orm8
-2022-08-26 14:07:08,265 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,266 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44205
-2022-08-26 14:07:08,266 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44205
-2022-08-26 14:07:08,266 - distributed.worker - INFO -           Worker name:                          7
-2022-08-26 14:07:08,266 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45023
-2022-08-26 14:07:08,266 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,266 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,266 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:08,266 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:08,266 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8b0wo5d3
-2022-08-26 14:07:08,266 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,276 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35049', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:08,276 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35049
-2022-08-26 14:07:08,276 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,276 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39821', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:08,277 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39821
-2022-08-26 14:07:08,277 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,277 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40691', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:08,277 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40691
-2022-08-26 14:07:08,277 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,278 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37207', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:08,278 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37207
-2022-08-26 14:07:08,278 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,278 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38277', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:08,278 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38277
-2022-08-26 14:07:08,279 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,279 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45411', name: 5, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:08,279 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45411
-2022-08-26 14:07:08,279 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,279 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40277', name: 6, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:08,280 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40277
-2022-08-26 14:07:08,280 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,280 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44205', name: 7, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:08,280 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44205
-2022-08-26 14:07:08,280 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,281 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,281 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,281 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,281 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,281 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,282 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,282 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,282 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,282 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,282 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,282 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,282 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,283 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,283 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,283 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36849
-2022-08-26 14:07:08,283 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:08,283 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,283 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,283 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,284 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,284 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,284 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,284 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,284 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,298 - distributed.scheduler - INFO - Receive client connection: Client-0b8af7c8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:08,298 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:08,321 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:39821 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,322 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:39821 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,323 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:39821 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,324 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:39821 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,325 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:39821 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,326 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:39821 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,327 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:39821 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,328 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:39821 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,329 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:39821 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,330 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:39821 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,330 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40691 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,331 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40691 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,332 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40691 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,333 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40691 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,334 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40691 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,335 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40691 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,336 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40691 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,337 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40691 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,338 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40691 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,339 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40691 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,340 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:37207 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,341 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:37207 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,342 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:37207 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,343 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:37207 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,344 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:37207 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,345 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:37207 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,346 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:37207 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,347 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:37207 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,348 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:37207 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,349 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:37207 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,350 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:38277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,351 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:38277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,352 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:38277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,353 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:38277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,355 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:38277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,356 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:38277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,357 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:38277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,358 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:38277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,359 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:38277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,360 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:38277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,361 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:45411 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,361 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:45411 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,362 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:45411 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,363 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:45411 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,364 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:45411 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,365 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:45411 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,366 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:45411 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,367 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:45411 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,368 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:45411 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,369 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:45411 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,370 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,371 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,372 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,373 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,374 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,375 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,376 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,377 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,378 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,379 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:40277 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,380 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:44205 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,381 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:44205 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,382 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:44205 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,384 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:44205 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,385 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:44205 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,386 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:44205 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,386 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:44205 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,387 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:44205 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,388 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:44205 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,389 - distributed.worker - DEBUG - Worker tcp://127.0.0.1:35049 has too many open connections to respond to data request from tcp://127.0.0.1:44205 (2/1). Throttling outgoing connections because worker is paused.
-2022-08-26 14:07:08,404 - distributed.scheduler - INFO - Remove client Client-0b8af7c8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:08,404 - distributed.scheduler - INFO - Remove client Client-0b8af7c8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:08,404 - distributed.scheduler - INFO - Close client connection: Client-0b8af7c8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:08,405 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35049
-2022-08-26 14:07:08,405 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39821
-2022-08-26 14:07:08,405 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40691
-2022-08-26 14:07:08,406 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37207
-2022-08-26 14:07:08,406 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38277
-2022-08-26 14:07:08,406 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45411
-2022-08-26 14:07:08,406 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40277
-2022-08-26 14:07:08,407 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44205
-2022-08-26 14:07:08,409 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35049', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:08,409 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35049
-2022-08-26 14:07:08,409 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39821', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:08,409 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39821
-2022-08-26 14:07:08,409 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40691', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:08,410 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40691
-2022-08-26 14:07:08,410 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37207', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:08,410 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37207
-2022-08-26 14:07:08,410 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38277', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:08,410 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38277
-2022-08-26 14:07:08,410 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45411', name: 5, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:08,410 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45411
-2022-08-26 14:07:08,410 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40277', name: 6, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:08,410 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40277
-2022-08-26 14:07:08,411 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44205', name: 7, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:08,411 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44205
-2022-08-26 14:07:08,411 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:08,411 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-694c67b1-958a-48de-96b1-44037b7e823b Address tcp://127.0.0.1:35049 Status: Status.closing
-2022-08-26 14:07:08,411 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2236f350-4ee0-4d6b-a5ea-6dbe9c270368 Address tcp://127.0.0.1:39821 Status: Status.closing
-2022-08-26 14:07:08,411 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-73f40a54-045f-40d4-9361-4fd23c1d91ce Address tcp://127.0.0.1:40691 Status: Status.closing
-2022-08-26 14:07:08,411 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8d1d8e3c-f538-4b62-b87b-068c78099848 Address tcp://127.0.0.1:37207 Status: Status.closing
-2022-08-26 14:07:08,412 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-34bc2f7f-d0b9-45c0-989e-9d96549e373d Address tcp://127.0.0.1:38277 Status: Status.closing
-2022-08-26 14:07:08,412 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f22781fa-3f80-481a-8137-c47fac821865 Address tcp://127.0.0.1:45411 Status: Status.closing
-2022-08-26 14:07:08,412 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-94384de3-74dc-4fda-9e36-c3997c979207 Address tcp://127.0.0.1:40277 Status: Status.closing
-2022-08-26 14:07:08,412 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9f2459f9-38d1-4b0e-9da1-24c78ae1cf80 Address tcp://127.0.0.1:44205 Status: Status.closing
-2022-08-26 14:07:08,420 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:08,421 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:08,623 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
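
The long run of DEBUG messages in test_throttle_outgoing_connections above shows one worker refusing additional peer data requests because its outgoing-connection budget (here 1) is exhausted while it is paused. That budget is a worker configuration value; the sketch below assumes the `distributed.worker.connections.outgoing` key from distributed's configuration reference and uses an arbitrary workload.

    import dask
    from distributed import Client, LocalCluster

    # Cap concurrent outgoing data transfers per worker at one; a paused
    # worker throttles peer requests beyond that, as the DEBUG lines show.
    with dask.config.set({"distributed.worker.connections.outgoing": 1}):
        with LocalCluster(n_workers=2, threads_per_worker=1) as cluster, Client(cluster) as client:
            # Any task whose dependencies live on the other worker now has
            # its transfers served at most one at a time.
            futures = client.map(lambda x: x + 1, range(100))
            print(sum(client.gather(futures)))
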
-distributed/tests/test_nanny.py::test_scheduler_address_config 2022-08-26 14:07:08,630 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:08,631 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:08,631 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46123
-2022-08-26 14:07:08,631 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41789
-2022-08-26 14:07:08,634 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:41465'
-2022-08-26 14:07:09,256 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42863
-2022-08-26 14:07:09,256 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42863
-2022-08-26 14:07:09,256 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37027
-2022-08-26 14:07:09,256 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46123
-2022-08-26 14:07:09,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:09,256 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:09,256 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:09,256 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rgh55m3d
-2022-08-26 14:07:09,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:09,526 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42863', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:09,526 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42863
-2022-08-26 14:07:09,526 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:09,526 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46123
-2022-08-26 14:07:09,526 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:09,527 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:09,570 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41465'.
-2022-08-26 14:07:09,570 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:09,570 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42863
-2022-08-26 14:07:09,571 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ace9cdd6-7b1a-4aa1-88f2-2acb4bfd09b8 Address tcp://127.0.0.1:42863 Status: Status.closing
-2022-08-26 14:07:09,571 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42863', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:09,571 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42863
-2022-08-26 14:07:09,571 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:09,700 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:09,700 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:09,901 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
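
test_scheduler_address_config above starts a Nanny without passing the scheduler address explicitly. The sketch below assumes, based on the test's name, that the address can instead be supplied through the dask configuration under a `scheduler-address` key; treat that key name as an assumption rather than a documented guarantee.

    import asyncio
    import dask
    from distributed import Nanny, Scheduler

    async def main() -> None:
        async with Scheduler(port=0, dashboard_address=":0") as scheduler:
            # Assumed config key: publish the scheduler address so a Nanny
            # created without an explicit address can still find it.
            with dask.config.set({"scheduler-address": scheduler.address}):
                async with Nanny(nthreads=1) as nanny:
                    print("worker launched at", nanny.worker_address)

    if __name__ == "__main__":
        asyncio.run(main())
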
-distributed/tests/test_nanny.py::test_wait_for_scheduler SKIPPED (ne...)
-distributed/tests/test_nanny.py::test_environment_variable 2022-08-26 14:07:09,907 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:09,909 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:09,909 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42795
-2022-08-26 14:07:09,909 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36961
-2022-08-26 14:07:09,912 - distributed.scheduler - INFO - Receive client connection: Client-0c8149c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:09,912 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:09,918 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:40771'
-2022-08-26 14:07:09,919 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:44633'
-2022-08-26 14:07:10,541 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34981
-2022-08-26 14:07:10,541 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34981
-2022-08-26 14:07:10,541 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42027
-2022-08-26 14:07:10,541 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42795
-2022-08-26 14:07:10,541 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:10,541 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:10,541 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qwqvroex
-2022-08-26 14:07:10,541 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:10,542 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40559
-2022-08-26 14:07:10,542 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40559
-2022-08-26 14:07:10,542 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39801
-2022-08-26 14:07:10,542 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42795
-2022-08-26 14:07:10,542 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:10,542 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:10,542 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d7a6pf86
-2022-08-26 14:07:10,542 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:10,792 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34981', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:10,792 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34981
-2022-08-26 14:07:10,792 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:10,793 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42795
-2022-08-26 14:07:10,793 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:10,794 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:10,809 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40559', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:10,809 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40559
-2022-08-26 14:07:10,809 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:10,810 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42795
-2022-08-26 14:07:10,810 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:10,810 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:10,859 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:07:10,859 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:07:10,860 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:40771'.
-2022-08-26 14:07:10,860 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:10,860 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:44633'.
-2022-08-26 14:07:10,860 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:10,861 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34981
-2022-08-26 14:07:10,861 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40559
-2022-08-26 14:07:10,861 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2ed04cdf-2b57-4c97-bdf6-e0c8ffe6fe16 Address tcp://127.0.0.1:34981 Status: Status.closing
-2022-08-26 14:07:10,862 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a9abcd22-fc60-4423-b2cc-7108de0383cb Address tcp://127.0.0.1:40559 Status: Status.closing
-2022-08-26 14:07:10,862 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34981', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:10,862 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34981
-2022-08-26 14:07:10,862 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40559', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:10,862 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40559
-2022-08-26 14:07:10,862 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:11,003 - distributed.scheduler - INFO - Remove client Client-0c8149c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:11,003 - distributed.scheduler - INFO - Remove client Client-0c8149c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:11,003 - distributed.scheduler - INFO - Close client connection: Client-0c8149c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:11,004 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:11,004 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:11,203 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_environment_variable_by_config 2022-08-26 14:07:11,209 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:11,211 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:11,211 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34775
-2022-08-26 14:07:11,211 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39341
-2022-08-26 14:07:11,214 - distributed.scheduler - INFO - Receive client connection: Client-0d47f1b5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:11,214 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:11,223 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:34933'
-2022-08-26 14:07:11,224 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:45117'
-2022-08-26 14:07:11,225 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:43201'
-2022-08-26 14:07:11,859 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44273
-2022-08-26 14:07:11,859 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44273
-2022-08-26 14:07:11,859 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39791
-2022-08-26 14:07:11,859 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34775
-2022-08-26 14:07:11,859 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:11,859 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:11,859 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-co7aelqr
-2022-08-26 14:07:11,860 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:11,866 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33131
-2022-08-26 14:07:11,866 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33131
-2022-08-26 14:07:11,866 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44597
-2022-08-26 14:07:11,866 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34775
-2022-08-26 14:07:11,866 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:11,866 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:11,866 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zixq1w1p
-2022-08-26 14:07:11,866 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:11,868 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33365
-2022-08-26 14:07:11,869 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33365
-2022-08-26 14:07:11,869 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42159
-2022-08-26 14:07:11,869 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34775
-2022-08-26 14:07:11,869 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:11,869 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:11,869 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ji13pux_
-2022-08-26 14:07:11,869 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:12,123 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33365', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:12,124 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33365
-2022-08-26 14:07:12,124 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:12,124 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34775
-2022-08-26 14:07:12,124 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:12,125 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:12,128 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33131', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:12,128 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33131
-2022-08-26 14:07:12,128 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:12,128 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34775
-2022-08-26 14:07:12,128 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:12,129 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:12,149 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44273', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:12,149 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44273
-2022-08-26 14:07:12,149 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:12,149 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34775
-2022-08-26 14:07:12,150 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:12,150 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:12,169 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:07:12,170 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:07:12,170 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:07:12,170 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:34933'.
-2022-08-26 14:07:12,171 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:12,171 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:45117'.
-2022-08-26 14:07:12,171 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:12,171 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33131
-2022-08-26 14:07:12,171 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:43201'.
-2022-08-26 14:07:12,171 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:12,172 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44273
-2022-08-26 14:07:12,172 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4a0cd498-a3aa-4ed2-983d-3169298e5cb1 Address tcp://127.0.0.1:33131 Status: Status.closing
-2022-08-26 14:07:12,172 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33365
-2022-08-26 14:07:12,172 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33131', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:12,172 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33131
-2022-08-26 14:07:12,172 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6215d5bd-2637-4f19-b223-60271e33cb96 Address tcp://127.0.0.1:44273 Status: Status.closing
-2022-08-26 14:07:12,172 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44273', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:12,173 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44273
-2022-08-26 14:07:12,173 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a510c054-d408-4489-84ae-9c1ab9012e24 Address tcp://127.0.0.1:33365 Status: Status.closing
-2022-08-26 14:07:12,173 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33365', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:12,173 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33365
-2022-08-26 14:07:12,173 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:12,344 - distributed.scheduler - INFO - Remove client Client-0d47f1b5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:12,344 - distributed.scheduler - INFO - Remove client Client-0d47f1b5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:12,344 - distributed.scheduler - INFO - Close client connection: Client-0d47f1b5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:12,345 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:12,345 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:12,545 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_environment_variable_config 2022-08-26 14:07:12,551 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:12,552 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:12,553 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33637
-2022-08-26 14:07:12,553 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41789
-2022-08-26 14:07:12,556 - distributed.scheduler - INFO - Receive client connection: Client-0e14b4a1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:12,556 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:12,560 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:35993'
-2022-08-26 14:07:13,178 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43743
-2022-08-26 14:07:13,178 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43743
-2022-08-26 14:07:13,178 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46847
-2022-08-26 14:07:13,178 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33637
-2022-08-26 14:07:13,178 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:13,178 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:13,178 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:13,178 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6xbktpbn
-2022-08-26 14:07:13,178 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:13,439 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43743', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:13,439 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43743
-2022-08-26 14:07:13,439 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:13,440 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33637
-2022-08-26 14:07:13,440 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:13,440 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:13,447 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:07:13,448 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:35993'.
-2022-08-26 14:07:13,448 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:13,449 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43743
-2022-08-26 14:07:13,449 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-798aa1bf-f74d-4f84-9d8b-9ddeb9daf679 Address tcp://127.0.0.1:43743 Status: Status.closing
-2022-08-26 14:07:13,450 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43743', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:13,450 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43743
-2022-08-26 14:07:13,450 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:13,578 - distributed.scheduler - INFO - Remove client Client-0e14b4a1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:13,578 - distributed.scheduler - INFO - Remove client Client-0e14b4a1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:13,578 - distributed.scheduler - INFO - Close client connection: Client-0e14b4a1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:13,579 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:13,579 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:13,777 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_environment_variable_pre_post_spawn 2022-08-26 14:07:13,783 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:13,784 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:13,785 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36623
-2022-08-26 14:07:13,785 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38411
-2022-08-26 14:07:13,788 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:35601'
-2022-08-26 14:07:14,410 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45039
-2022-08-26 14:07:14,410 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45039
-2022-08-26 14:07:14,410 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:14,410 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46005
-2022-08-26 14:07:14,410 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36623
-2022-08-26 14:07:14,410 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:14,410 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:14,410 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:14,410 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h4cghz6f
-2022-08-26 14:07:14,411 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:14,677 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45039', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:14,678 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45039
-2022-08-26 14:07:14,678 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:14,678 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36623
-2022-08-26 14:07:14,678 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:14,678 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:14,727 - distributed.scheduler - INFO - Receive client connection: Client-0f5ff1d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:14,727 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:14,729 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:07:14,738 - distributed.scheduler - INFO - Remove client Client-0f5ff1d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:14,738 - distributed.scheduler - INFO - Remove client Client-0f5ff1d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:14,738 - distributed.scheduler - INFO - Close client connection: Client-0f5ff1d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:14,739 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:35601'.
-2022-08-26 14:07:14,739 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:14,739 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45039
-2022-08-26 14:07:14,740 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a7bae326-529e-4cb7-bb01-2a9f9fd61781 Address tcp://127.0.0.1:45039 Status: Status.closing
-2022-08-26 14:07:14,740 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45039', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:14,740 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45039
-2022-08-26 14:07:14,740 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:14,870 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:14,870 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:15,069 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_local_directory 2022-08-26 14:07:15,075 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:15,076 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:15,077 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37329
-2022-08-26 14:07:15,077 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40047
-2022-08-26 14:07:15,080 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:43515'
-2022-08-26 14:07:15,689 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34827
-2022-08-26 14:07:15,689 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34827
-2022-08-26 14:07:15,689 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40585
-2022-08-26 14:07:15,689 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37329
-2022-08-26 14:07:15,689 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:15,689 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:15,689 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:15,689 - distributed.worker - INFO -       Local Directory: /tmp/tmph_zld44m./dask-worker-space/worker-s9mbrk7z
-2022-08-26 14:07:15,689 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:15,956 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34827', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:15,956 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34827
-2022-08-26 14:07:15,956 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:15,956 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37329
-2022-08-26 14:07:15,957 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:15,957 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:15,965 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:43515'.
-2022-08-26 14:07:15,965 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:15,965 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34827
-2022-08-26 14:07:15,966 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-45464884-b181-4b1a-94d9-1ddd68dd1bb3 Address tcp://127.0.0.1:34827 Status: Status.closing
-2022-08-26 14:07:15,966 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34827', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:15,966 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34827
-2022-08-26 14:07:15,966 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:16,094 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:16,095 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:16,294 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_mp_process_worker_no_daemon 2022-08-26 14:07:16,300 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:16,301 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:16,302 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37625
-2022-08-26 14:07:16,302 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45263
-2022-08-26 14:07:16,305 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36573'
-2022-08-26 14:07:16,926 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45105
-2022-08-26 14:07:16,926 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45105
-2022-08-26 14:07:16,926 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:16,926 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44671
-2022-08-26 14:07:16,927 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37625
-2022-08-26 14:07:16,927 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:16,927 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:16,927 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:16,927 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zvz_mmyu
-2022-08-26 14:07:16,927 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:17,192 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45105', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:17,193 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45105
-2022-08-26 14:07:17,193 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:17,193 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37625
-2022-08-26 14:07:17,193 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:17,194 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:17,243 - distributed.scheduler - INFO - Receive client connection: Client-10dfe726-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:17,244 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:18,440 - distributed.scheduler - INFO - Remove client Client-10dfe726-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:18,440 - distributed.scheduler - INFO - Remove client Client-10dfe726-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:18,441 - distributed.scheduler - INFO - Close client connection: Client-10dfe726-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:18,441 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36573'.
-2022-08-26 14:07:18,441 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:18,442 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45105
-2022-08-26 14:07:18,442 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-989a31d1-5456-4e86-809b-2f8203d0db2d Address tcp://127.0.0.1:45105 Status: Status.closing
-2022-08-26 14:07:18,443 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45105', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:18,443 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45105
-2022-08-26 14:07:18,443 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:18,613 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:18,613 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:18,812 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_mp_pool_worker_no_daemon 2022-08-26 14:07:18,818 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:18,819 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:18,819 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40891
-2022-08-26 14:07:18,820 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45305
-2022-08-26 14:07:18,823 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:39937'
-2022-08-26 14:07:19,429 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41633
-2022-08-26 14:07:19,429 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41633
-2022-08-26 14:07:19,429 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:19,429 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46683
-2022-08-26 14:07:19,429 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40891
-2022-08-26 14:07:19,429 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:19,429 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:19,429 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:19,429 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-98isuk40
-2022-08-26 14:07:19,429 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:19,692 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41633', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:19,692 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41633
-2022-08-26 14:07:19,692 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:19,693 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40891
-2022-08-26 14:07:19,693 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:19,693 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:19,711 - distributed.scheduler - INFO - Receive client connection: Client-12587587-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:19,711 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:20,922 - distributed.scheduler - INFO - Remove client Client-12587587-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:20,922 - distributed.scheduler - INFO - Remove client Client-12587587-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:20,922 - distributed.scheduler - INFO - Close client connection: Client-12587587-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:20,923 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:39937'.
-2022-08-26 14:07:20,923 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:20,923 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41633
-2022-08-26 14:07:20,924 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-23245792-f1e4-4e89-8999-34e183e85c0b Address tcp://127.0.0.1:41633 Status: Status.closing
-2022-08-26 14:07:20,924 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41633', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:20,924 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41633
-2022-08-26 14:07:20,924 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:21,094 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:21,094 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:21,295 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_nanny_closes_cleanly 2022-08-26 14:07:21,300 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:21,302 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:21,302 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33309
-2022-08-26 14:07:21,302 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43345
-2022-08-26 14:07:21,305 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:35507'
-2022-08-26 14:07:21,928 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32953
-2022-08-26 14:07:21,928 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32953
-2022-08-26 14:07:21,928 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44757
-2022-08-26 14:07:21,928 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33309
-2022-08-26 14:07:21,928 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:21,928 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:21,928 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:21,928 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8u853u9q
-2022-08-26 14:07:21,928 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:22,197 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32953', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:22,197 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32953
-2022-08-26 14:07:22,197 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:22,197 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33309
-2022-08-26 14:07:22,197 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:22,198 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:22,241 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:35507'.
-2022-08-26 14:07:22,241 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:22,242 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32953
-2022-08-26 14:07:22,242 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-44fa0e88-e8a6-46a8-96d4-ca5078bc3835 Address tcp://127.0.0.1:32953 Status: Status.closing
-2022-08-26 14:07:22,243 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32953', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:22,243 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32953
-2022-08-26 14:07:22,243 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:22,373 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:22,373 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:22,575 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_lifetime SKIPPED (need --runsl...)
-distributed/tests/test_nanny.py::test_nanny_closes_cleanly_if_worker_is_terminated 2022-08-26 14:07:22,581 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:22,583 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:22,583 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44841
-2022-08-26 14:07:22,583 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46671
-2022-08-26 14:07:22,586 - distributed.scheduler - INFO - Receive client connection: Client-140f36a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:22,587 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:22,590 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:40339'
-2022-08-26 14:07:23,211 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38523
-2022-08-26 14:07:23,211 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38523
-2022-08-26 14:07:23,211 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43125
-2022-08-26 14:07:23,212 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44841
-2022-08-26 14:07:23,212 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:23,212 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:23,212 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:23,212 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-59ctig6q
-2022-08-26 14:07:23,212 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:23,481 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38523', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:23,482 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38523
-2022-08-26 14:07:23,482 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:23,482 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44841
-2022-08-26 14:07:23,482 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:23,483 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:23,527 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38523
-2022-08-26 14:07:23,529 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fec3daa7-17ed-4e44-9236-6c2312d49c07 Address tcp://127.0.0.1:38523 Status: Status.closing
-2022-08-26 14:07:23,529 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38523', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:23,530 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38523
-2022-08-26 14:07:23,530 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:23,530 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:07:23,530 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:07:23,530 - tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOMainLoop object at 0x564040bb3210>>, <Task finished name='Task-137484' coro=<PooledRPCCall.__getattr__.<locals>.send_recv_from_rpc() done, defined at /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py:1146> exception=CommClosedError('in <TCP (closed) ConnectionPool.terminate local=tcp://127.0.0.1:50590 remote=tcp://127.0.0.1:38523>: Stream is closed')>)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 740, in _run_callback
-    ret = callback()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 764, in _discard_future_result
-    future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1154, in send_recv_from_rpc
-    return await send_recv(comm=comm, op=key, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 919, in send_recv
-    response = await comm.read(deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 241, in read
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed) ConnectionPool.terminate local=tcp://127.0.0.1:50590 remote=tcp://127.0.0.1:38523>: Stream is closed
-2022-08-26 14:07:23,658 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:40339'.
-2022-08-26 14:07:23,659 - distributed.scheduler - INFO - Remove client Client-140f36a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:23,659 - distributed.scheduler - INFO - Remove client Client-140f36a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:23,659 - distributed.scheduler - INFO - Close client connection: Client-140f36a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:23,660 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:23,660 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:23,859 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_config 2022-08-26 14:07:23,865 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:23,867 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:23,867 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36129
-2022-08-26 14:07:23,867 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43035
-2022-08-26 14:07:23,870 - distributed.scheduler - INFO - Receive client connection: Client-14d3181a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:23,870 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:23,874 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:32911'
-2022-08-26 14:07:24,480 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39053
-2022-08-26 14:07:24,480 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39053
-2022-08-26 14:07:24,480 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33591
-2022-08-26 14:07:24,480 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36129
-2022-08-26 14:07:24,481 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:24,481 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:24,481 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:24,481 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4a68wlh5
-2022-08-26 14:07:24,481 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:24,743 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39053', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:24,743 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39053
-2022-08-26 14:07:24,743 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:24,743 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36129
-2022-08-26 14:07:24,744 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:24,744 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:24,758 - distributed.worker - INFO - Run out-of-band function 'get'
-2022-08-26 14:07:24,759 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:32911'.
-2022-08-26 14:07:24,759 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:24,759 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39053
-2022-08-26 14:07:24,760 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fadfa805-edf5-47ee-812c-46398bedc29b Address tcp://127.0.0.1:39053 Status: Status.closing
-2022-08-26 14:07:24,760 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39053', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:24,760 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39053
-2022-08-26 14:07:24,760 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:24,887 - distributed.scheduler - INFO - Remove client Client-14d3181a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:24,888 - distributed.scheduler - INFO - Remove client Client-14d3181a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:24,888 - distributed.scheduler - INFO - Close client connection: Client-14d3181a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:24,888 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:24,888 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:25,087 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_nanny_port_range 2022-08-26 14:07:25,093 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:25,095 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:25,095 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38491
-2022-08-26 14:07:25,095 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34141
-2022-08-26 14:07:25,098 - distributed.scheduler - INFO - Receive client connection: Client-158e78d0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:25,098 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:25,102 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:9867'
-2022-08-26 14:07:25,716 - distributed.worker - INFO -       Start worker at:       tcp://127.0.0.1:9869
-2022-08-26 14:07:25,716 - distributed.worker - INFO -          Listening to:       tcp://127.0.0.1:9869
-2022-08-26 14:07:25,716 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33101
-2022-08-26 14:07:25,716 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38491
-2022-08-26 14:07:25,716 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:25,716 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:25,716 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:25,716 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-spay85sc
-2022-08-26 14:07:25,716 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:25,981 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:9869', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:25,981 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:9869
-2022-08-26 14:07:25,981 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:25,981 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38491
-2022-08-26 14:07:25,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:25,982 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:25,990 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:9868'
-2022-08-26 14:07:26,600 - distributed.worker - INFO -       Start worker at:       tcp://127.0.0.1:9870
-2022-08-26 14:07:26,600 - distributed.worker - INFO -          Listening to:       tcp://127.0.0.1:9870
-2022-08-26 14:07:26,600 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35559
-2022-08-26 14:07:26,600 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38491
-2022-08-26 14:07:26,600 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:26,600 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:26,600 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:26,600 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zn6ee20o
-2022-08-26 14:07:26,600 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:26,850 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:9870', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:26,850 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:9870
-2022-08-26 14:07:26,850 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:26,850 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38491
-2022-08-26 14:07:26,850 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:26,851 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:26,878 - distributed.nanny - INFO - Closing Nanny at 'not-running'.
-2022-08-26 14:07:26,881 - distributed.worker - INFO - Run out-of-band function 'get_worker_port'
-2022-08-26 14:07:26,881 - distributed.worker - INFO - Run out-of-band function 'get_worker_port'
-2022-08-26 14:07:26,882 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:9868'.
-2022-08-26 14:07:26,882 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:26,882 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:9870
-2022-08-26 14:07:26,883 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b1c08c5d-e6ec-4160-8678-85af5443f8da Address tcp://127.0.0.1:9870 Status: Status.closing
-2022-08-26 14:07:26,883 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:9870', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:26,883 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:9870
-2022-08-26 14:07:27,013 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:9867'.
-2022-08-26 14:07:27,013 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:27,013 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:9869
-2022-08-26 14:07:27,014 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-128e31a2-9754-417b-b27d-3017fb1a95f9 Address tcp://127.0.0.1:9869 Status: Status.closing
-2022-08-26 14:07:27,015 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:9869', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:27,015 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:9869
-2022-08-26 14:07:27,015 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:27,145 - distributed.scheduler - INFO - Remove client Client-158e78d0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:27,145 - distributed.scheduler - INFO - Remove client Client-158e78d0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:27,146 - distributed.scheduler - INFO - Close client connection: Client-158e78d0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:27,146 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:27,146 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:27,346 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_nanny_closed_by_keyboard_interrupt[tcp] SKIPPED
-distributed/tests/test_nanny.py::test_nanny_closed_by_keyboard_interrupt[ucx] SKIPPED
-distributed/tests/test_nanny.py::test_worker_start_exception 2022-08-26 14:07:27,358 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:27,360 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:27,360 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46709
-2022-08-26 14:07:27,360 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44609
-2022-08-26 14:07:28,209 - distributed.worker - INFO - Stopping worker
-2022-08-26 14:07:28,209 - distributed.worker - INFO - Closed worker has not yet started: Status.init
-2022-08-26 14:07:28,210 - distributed.nanny - ERROR - Failed to start worker
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 481, in start
-    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
-    return await fut
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_nanny.py", line 500, in start_unsafe
-    raise ValueError("broken")
-ValueError: broken
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 892, in run
-    await worker
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 489, in start
-    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
-RuntimeError: BrokenWorker failed to start.
-2022-08-26 14:07:28,249 - distributed.nanny - ERROR - Failed to start process
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 481, in start
-    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
-    return await fut
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_nanny.py", line 500, in start_unsafe
-    raise ValueError("broken")
-ValueError: broken
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 438, in instantiate
-    result = await self.process.start()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 695, in start
-    msg = await self._wait_until_connected(uid)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 823, in _wait_until_connected
-    raise msg["exception"]
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 892, in run
-    await worker
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 489, in start
-    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
-RuntimeError: BrokenWorker failed to start.
-2022-08-26 14:07:28,254 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:28,254 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:28,453 - distributed.utils_perf - WARNING - full garbage collections took 82% CPU time recently (threshold: 10%)
-PASSED
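
The pair of chained tracebacks above is the expected output of this test: the failing start_unsafe() hook is wrapped by start(), which re-raises with "raise ... from exc", and the nanny then surfaces the same chain from the worker process. A minimal standalone sketch of that chaining pattern follows; the class name and timeout are illustrative, not taken from distributed.

import asyncio

class BrokenServer:
    # Toy stand-in for a server whose startup hook deliberately fails.
    async def start_unsafe(self):
        raise ValueError("broken")

    async def start(self):
        try:
            await asyncio.wait_for(self.start_unsafe(), timeout=5)
        except Exception as exc:
            # Preserve the original error as __cause__, as in the log above.
            raise RuntimeError(f"{type(self).__name__} failed to start.") from exc

async def main():
    try:
        await BrokenServer().start()
    except RuntimeError as err:
        print(err, "| caused by:", repr(err.__cause__))

asyncio.run(main())
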
-distributed/tests/test_nanny.py::test_failure_during_worker_initialization 2022-08-26 14:07:28,458 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:28,460 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:28,460 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40599
-2022-08-26 14:07:28,460 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45393
-2022-08-26 14:07:29,066 - distributed.nanny - ERROR - Failed to initialize Worker
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 853, in _run
-    worker = Worker(**worker_kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 729, in __init__
-    ServerNode.__init__(
-TypeError: Server.__init__() got an unexpected keyword argument 'foo'
-2022-08-26 14:07:29,096 - distributed.nanny - ERROR - Failed to start process
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 438, in instantiate
-    result = await self.process.start()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 695, in start
-    msg = await self._wait_until_connected(uid)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 823, in _wait_until_connected
-    raise msg["exception"]
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 853, in _run
-    worker = Worker(**worker_kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 729, in __init__
-    ServerNode.__init__(
-TypeError: Server.__init__() got an unexpected keyword argument 'foo'
-2022-08-26 14:07:29,100 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:29,100 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:29,298 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_environ_plugin 2022-08-26 14:07:29,304 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:29,305 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:29,305 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35929
-2022-08-26 14:07:29,305 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42699
-2022-08-26 14:07:29,311 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36357'
-2022-08-26 14:07:29,311 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:35319'
-2022-08-26 14:07:29,927 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ff_5idvi', purging
-2022-08-26 14:07:29,932 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35577
-2022-08-26 14:07:29,932 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35577
-2022-08-26 14:07:29,932 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:29,932 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43529
-2022-08-26 14:07:29,932 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35929
-2022-08-26 14:07:29,932 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:29,932 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:29,932 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:29,932 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-88yjaxag
-2022-08-26 14:07:29,932 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:29,944 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33335
-2022-08-26 14:07:29,944 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33335
-2022-08-26 14:07:29,944 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:07:29,944 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37379
-2022-08-26 14:07:29,945 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35929
-2022-08-26 14:07:29,945 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:29,945 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:07:29,945 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:29,945 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-imv552nm
-2022-08-26 14:07:29,945 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:30,196 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33335', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:30,196 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33335
-2022-08-26 14:07:30,196 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:30,196 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35929
-2022-08-26 14:07:30,196 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:30,197 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:30,201 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35577', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:30,201 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35577
-2022-08-26 14:07:30,201 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:30,202 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35929
-2022-08-26 14:07:30,202 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:30,202 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:30,253 - distributed.scheduler - INFO - Receive client connection: Client-18a11af8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:30,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:30,257 - distributed.nanny - INFO - Starting Nanny plugin Environ-a287d7f8-17db-435d-a76f-1abb3e1c8c7c
-2022-08-26 14:07:30,257 - distributed.nanny - INFO - Starting Nanny plugin Environ-a287d7f8-17db-435d-a76f-1abb3e1c8c7c
-2022-08-26 14:07:30,257 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:30,257 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:30,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33335
-2022-08-26 14:07:30,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35577
-2022-08-26 14:07:30,258 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d4c2f30b-e44c-4fcb-b440-33fb95ec3f94 Address tcp://127.0.0.1:33335 Status: Status.closing
-2022-08-26 14:07:30,259 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33335', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:30,259 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33335
-2022-08-26 14:07:30,259 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-91382e2b-6794-48c0-9419-5a10ae56d899 Address tcp://127.0.0.1:35577 Status: Status.closing
-2022-08-26 14:07:30,259 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35577', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:30,259 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35577
-2022-08-26 14:07:30,259 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:30,393 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:07:30,394 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:07:31,007 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42161
-2022-08-26 14:07:31,008 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42161
-2022-08-26 14:07:31,008 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:07:31,008 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40607
-2022-08-26 14:07:31,008 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35929
-2022-08-26 14:07:31,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:31,008 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:07:31,008 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:31,008 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uysxum_4
-2022-08-26 14:07:31,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:31,014 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45641
-2022-08-26 14:07:31,014 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45641
-2022-08-26 14:07:31,014 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:31,014 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35861
-2022-08-26 14:07:31,014 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35929
-2022-08-26 14:07:31,014 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:31,014 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:31,014 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:31,014 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6j2gej3x
-2022-08-26 14:07:31,014 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:31,262 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45641', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:31,263 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45641
-2022-08-26 14:07:31,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:31,263 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35929
-2022-08-26 14:07:31,263 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:31,264 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:31,277 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42161', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:31,277 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42161
-2022-08-26 14:07:31,277 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:31,277 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35929
-2022-08-26 14:07:31,278 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:31,278 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:31,288 - distributed.nanny - INFO - Starting Nanny plugin Environ-a287d7f8-17db-435d-a76f-1abb3e1c8c7c
-2022-08-26 14:07:31,288 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:37759'
-2022-08-26 14:07:31,899 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39747
-2022-08-26 14:07:31,899 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39747
-2022-08-26 14:07:31,899 - distributed.worker - INFO -           Worker name:                        new
-2022-08-26 14:07:31,899 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40501
-2022-08-26 14:07:31,899 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35929
-2022-08-26 14:07:31,899 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:31,899 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:31,899 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:31,899 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6yz52qo_
-2022-08-26 14:07:31,899 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:32,142 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39747', name: new, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:32,143 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39747
-2022-08-26 14:07:32,143 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:32,143 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35929
-2022-08-26 14:07:32,143 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:32,144 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:32,181 - distributed.worker - INFO - Run out-of-band function 'getenv'
-2022-08-26 14:07:32,181 - distributed.worker - INFO - Run out-of-band function 'getenv'
-2022-08-26 14:07:32,181 - distributed.worker - INFO - Run out-of-band function 'getenv'
-2022-08-26 14:07:32,182 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:37759'.
-2022-08-26 14:07:32,182 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:32,182 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39747
-2022-08-26 14:07:32,183 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-96e26a6c-ba46-448d-afcd-b38c8f196669 Address tcp://127.0.0.1:39747 Status: Status.closing
-2022-08-26 14:07:32,183 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39747', name: new, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:32,183 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39747
-2022-08-26 14:07:32,314 - distributed.scheduler - INFO - Remove client Client-18a11af8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:32,315 - distributed.scheduler - INFO - Remove client Client-18a11af8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:32,315 - distributed.scheduler - INFO - Close client connection: Client-18a11af8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:32,315 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36357'.
-2022-08-26 14:07:32,315 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:32,316 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:35319'.
-2022-08-26 14:07:32,316 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:32,316 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45641
-2022-08-26 14:07:32,316 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42161
-2022-08-26 14:07:32,317 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-08ba8942-3cdb-4788-8ae5-614c53f9d267 Address tcp://127.0.0.1:45641 Status: Status.closing
-2022-08-26 14:07:32,317 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-35bedf05-c524-43cc-af8d-6c4780114d11 Address tcp://127.0.0.1:42161 Status: Status.closing
-2022-08-26 14:07:32,317 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45641', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:32,317 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45641
-2022-08-26 14:07:32,317 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42161', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:32,317 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42161
-2022-08-26 14:07:32,317 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:32,458 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:32,458 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:32,658 - distributed.utils_perf - WARNING - full garbage collections took 82% CPU time recently (threshold: 10%)
-PASSED
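
The repeated "Run out-of-band function 'getenv'" lines correspond to Client.run, which executes a plain function in every worker process; the test uses it to confirm that the Environ nanny plugin named in the log injected the expected variables. A small sketch of that kind of check, assuming an already-running cluster at a hypothetical scheduler address:

import os
from dask.distributed import Client

def getenv(name="FOO"):
    # Executed inside each worker process; returns the value set there.
    return os.environ.get(name)

client = Client("tcp://127.0.0.1:8786")   # hypothetical scheduler address
print(client.run(getenv))                 # {worker_address: value, ...}
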
-distributed/tests/test_nanny.py::test_no_unnecessary_imports_on_worker[scipy] 2022-08-26 14:07:32,664 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:32,665 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:32,665 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41927
-2022-08-26 14:07:32,665 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41109
-2022-08-26 14:07:32,669 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:44047'
-2022-08-26 14:07:33,281 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34893
-2022-08-26 14:07:33,281 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34893
-2022-08-26 14:07:33,281 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:33,281 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36127
-2022-08-26 14:07:33,281 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41927
-2022-08-26 14:07:33,281 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:33,281 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:33,281 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:33,281 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-24kzp11l
-2022-08-26 14:07:33,281 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:33,548 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34893', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:33,549 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34893
-2022-08-26 14:07:33,549 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:33,549 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41927
-2022-08-26 14:07:33,549 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:33,549 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:33,557 - distributed.scheduler - INFO - Receive client connection: Client-1a993cc8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:33,557 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:33,612 - distributed.worker - INFO - Run out-of-band function 'assert_no_import'
-2022-08-26 14:07:33,613 - distributed.scheduler - INFO - Remove client Client-1a993cc8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:33,613 - distributed.scheduler - INFO - Remove client Client-1a993cc8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:33,613 - distributed.scheduler - INFO - Close client connection: Client-1a993cc8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:33,614 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:44047'.
-2022-08-26 14:07:33,614 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:33,614 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34893
-2022-08-26 14:07:33,615 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f99d65fb-35ae-44dd-b2b4-9e81fc6de3ad Address tcp://127.0.0.1:34893 Status: Status.closing
-2022-08-26 14:07:33,615 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34893', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:33,615 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34893
-2022-08-26 14:07:33,615 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:33,756 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:33,756 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:33,955 - distributed.utils_perf - WARNING - full garbage collections took 82% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_no_unnecessary_imports_on_worker[pandas] 2022-08-26 14:07:33,961 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:33,962 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:33,962 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33013
-2022-08-26 14:07:33,962 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35361
-2022-08-26 14:07:33,965 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:38025'
-2022-08-26 14:07:34,578 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44505
-2022-08-26 14:07:34,578 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44505
-2022-08-26 14:07:34,578 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:34,578 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44821
-2022-08-26 14:07:34,578 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33013
-2022-08-26 14:07:34,578 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:34,578 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:34,578 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:34,579 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-miqfxpaa
-2022-08-26 14:07:34,579 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:34,844 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44505', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:34,844 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44505
-2022-08-26 14:07:34,844 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:34,844 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33013
-2022-08-26 14:07:34,845 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:34,845 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:34,854 - distributed.scheduler - INFO - Receive client connection: Client-1b5f151c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:34,854 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:34,908 - distributed.worker - INFO - Run out-of-band function 'assert_no_import'
-2022-08-26 14:07:34,908 - distributed.worker - WARNING - Run Failed
-Function: assert_no_import
-args:     ()
-kwargs:   {'dask_worker': <Worker 'tcp://127.0.0.1:44505', name: 0, status: running, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>}
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 3068, in run
-    result = function(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_nanny.py", line 565, in assert_no_import
-    assert modname not in sys.modules
-AssertionError: assert 'pandas' not in {'__future__': <module '__future__' from '/home/matthew/pkgsrc/install.20220728/lib/python3.10/__future__.py'>, '__mai...<module '__main__' (built-in)>, '__mp_main__': <module '__main__' (built-in)>, '_abc': <module '_abc' (built-in)>, ...}
- +  where {'__future__': <module '__future__' from '/home/matthew/pkgsrc/install.20220728/lib/python3.10/__future__.py'>, '__mai...<module '__main__' (built-in)>, '__mp_main__': <module '__main__' (built-in)>, '_abc': <module '_abc' (built-in)>, ...} = sys.modules
-2022-08-26 14:07:35,053 - distributed.scheduler - INFO - Remove client Client-1b5f151c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:35,054 - distributed.scheduler - INFO - Remove client Client-1b5f151c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:35,054 - distributed.scheduler - INFO - Close client connection: Client-1b5f151c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:35,054 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:38025'.
-2022-08-26 14:07:35,054 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:35,055 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44505
-2022-08-26 14:07:35,055 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0bc9ebd3-7d93-4b45-b4a3-b301aa8b27e6 Address tcp://127.0.0.1:44505 Status: Status.closing
-2022-08-26 14:07:35,055 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44505', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:35,055 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44505
-2022-08-26 14:07:35,056 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:35,198 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:35,198 - distributed.scheduler - INFO - Scheduler closing all comms
-Dumped cluster state to test_cluster_dump/test_no_unnecessary_imports_on_worker.yaml
-XFAIL
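
The XFAIL above is informative rather than a regression: assert_no_import runs on the worker via Client.run, which injects the worker object when the function accepts a dask_worker keyword (visible in the "kwargs:" line of the failure), and pandas is already present in that worker's sys.modules. A sketch of the same style of check, again with a hypothetical scheduler address:

import sys
from dask.distributed import Client

def assert_no_import(modname="pandas", dask_worker=None):
    # Client.run passes the Worker instance via the ``dask_worker`` keyword.
    where = dask_worker.address if dask_worker is not None else "this process"
    assert modname not in sys.modules, f"{modname} already imported on {where}"

client = Client("tcp://127.0.0.1:8786")   # hypothetical scheduler address
client.run(assert_no_import)
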
-distributed/tests/test_nanny.py::test_repeated_restarts SKIPPED (nee...)
-distributed/tests/test_nanny.py::test_restart_memory SKIPPED (need -...)
-distributed/tests/test_nanny.py::test_close_joins SKIPPED (need --ru...)
-distributed/tests/test_nanny.py::test_scheduler_crash_doesnt_restart 2022-08-26 14:07:35,288 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:35,290 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:35,290 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41265
-2022-08-26 14:07:35,290 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36577
-2022-08-26 14:07:35,293 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:43393'
-2022-08-26 14:07:35,911 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37161
-2022-08-26 14:07:35,911 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37161
-2022-08-26 14:07:35,911 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:35,911 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33859
-2022-08-26 14:07:35,911 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41265
-2022-08-26 14:07:35,911 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:35,911 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:35,911 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:35,911 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j7zfvy5u
-2022-08-26 14:07:35,911 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:36,176 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37161', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:36,177 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37161
-2022-08-26 14:07:36,177 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:36,177 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41265
-2022-08-26 14:07:36,177 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:36,177 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:36,178 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-663e00a8-2817-41d9-8688-d4bc34d295ab Address tcp://127.0.0.1:37161 Status: Status.running
-2022-08-26 14:07:36,178 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37161', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:07:36,178 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37161
-2022-08-26 14:07:36,178 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37161
-2022-08-26 14:07:36,178 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:36,178 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:36,179 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:36,181 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:07:36,181 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:07:36,308 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:43393'.
-2022-08-26 14:07:36,508 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_nanny.py::test_malloc_trim_threshold SKIPPED
-distributed/tests/test_parse_stdout.py::test_parse_rows PASSED
-distributed/tests/test_parse_stdout.py::test_build_xml PASSED
-distributed/tests/test_preload.py::test_worker_preload_file 2022-08-26 14:07:37,364 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:07:37,366 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:37,369 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:37,369 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42317
-2022-08-26 14:07:37,369 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:07:37,384 - distributed.preloading - INFO - Creating preload: /tmp/tmpr1zofhl_/worker_info.py
-2022-08-26 14:07:37,385 - distributed.utils - INFO - Reload module worker_info from .py file
-2022-08-26 14:07:37,385 - distributed.preloading - INFO - Import preload module: /tmp/tmpr1zofhl_/worker_info.py
-2022-08-26 14:07:37,412 - distributed.preloading - INFO - Run preload setup: /tmp/tmpr1zofhl_/worker_info.py
-2022-08-26 14:07:37,412 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41767
-2022-08-26 14:07:37,412 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41767
-2022-08-26 14:07:37,412 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44513
-2022-08-26 14:07:37,412 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42317
-2022-08-26 14:07:37,412 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:37,412 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:37,412 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:37,412 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-47cp0s7h
-2022-08-26 14:07:37,412 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:37,418 - distributed.preloading - INFO - Creating preload: /tmp/tmpr1zofhl_/worker_info.py
-2022-08-26 14:07:37,418 - distributed.utils - INFO - Reload module worker_info from .py file
-2022-08-26 14:07:37,419 - distributed.preloading - INFO - Import preload module: /tmp/tmpr1zofhl_/worker_info.py
-2022-08-26 14:07:37,445 - distributed.preloading - INFO - Run preload setup: /tmp/tmpr1zofhl_/worker_info.py
-2022-08-26 14:07:37,446 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41277
-2022-08-26 14:07:37,446 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41277
-2022-08-26 14:07:37,446 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43867
-2022-08-26 14:07:37,446 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42317
-2022-08-26 14:07:37,446 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:37,446 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:37,446 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:37,446 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_b_iljiq
-2022-08-26 14:07:37,446 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:37,694 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41767', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:37,953 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41767
-2022-08-26 14:07:37,953 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:37,953 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42317
-2022-08-26 14:07:37,954 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:37,954 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41277', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:37,954 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41277
-2022-08-26 14:07:37,954 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:37,954 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:37,954 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42317
-2022-08-26 14:07:37,955 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:37,955 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:37,960 - distributed.scheduler - INFO - Receive client connection: Client-1d39150b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:37,960 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:37,963 - distributed.worker - INFO - Run out-of-band function 'check_worker'
-2022-08-26 14:07:37,963 - distributed.worker - INFO - Run out-of-band function 'check_worker'
-2022-08-26 14:07:37,972 - distributed.scheduler - INFO - Remove client Client-1d39150b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:37,972 - distributed.scheduler - INFO - Remove client Client-1d39150b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:37,972 - distributed.scheduler - INFO - Close client connection: Client-1d39150b-2583-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_preload.py::test_worker_preload_text 2022-08-26 14:07:37,984 - distributed.preloading - INFO - Creating preload: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 14:07:37,985 - distributed.utils - INFO - Reload module tmpa0fddzhn from .py file
-2022-08-26 14:07:38,008 - distributed.preloading - INFO - Import preload module: /tmp/tmpa0fddzhn.py
-2022-08-26 14:07:38,028 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:38,030 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:38,030 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:42997
-2022-08-26 14:07:38,030 - distributed.scheduler - INFO -   dashboard at:                    :33973
-2022-08-26 14:07:38,030 - distributed.preloading - INFO - Run preload setup: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 14:07:38,031 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-47cp0s7h', purging
-2022-08-26 14:07:38,031 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-_b_iljiq', purging
-2022-08-26 14:07:38,031 - distributed.preloading - INFO - Creating preload: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 14:07:38,032 - distributed.utils - INFO - Reload module tmpflzpqwdx from .py file
-2022-08-26 14:07:38,033 - distributed.preloading - INFO - Import preload module: /tmp/tmpflzpqwdx.py
-2022-08-26 14:07:38,035 - distributed.preloading - INFO - Run preload setup: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 14:07:38,035 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:36151
-2022-08-26 14:07:38,035 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:36151
-2022-08-26 14:07:38,035 - distributed.worker - INFO -          dashboard at:        192.168.1.159:39085
-2022-08-26 14:07:38,035 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:42997
-2022-08-26 14:07:38,035 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:38,035 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:38,035 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:38,035 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1k0qfasx
-2022-08-26 14:07:38,035 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:38,038 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://192.168.1.159:36151', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:38,038 - distributed.scheduler - INFO - Starting worker compute stream, tcp://192.168.1.159:36151
-2022-08-26 14:07:38,038 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:38,038 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:42997
-2022-08-26 14:07:38,039 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:38,039 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:36151
-2022-08-26 14:07:38,039 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:38,039 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-91ba89ae-96d4-4c0a-9a84-dc64d354badd Address tcp://192.168.1.159:36151 Status: Status.closing
-2022-08-26 14:07:38,040 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://192.168.1.159:36151', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:38,040 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:36151
-2022-08-26 14:07:38,040 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:38,041 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:38,041 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_preload.py::test_worker_preload_config 2022-08-26 14:07:38,046 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:38,048 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:38,048 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42619
-2022-08-26 14:07:38,048 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39569
-2022-08-26 14:07:38,048 - distributed.preloading - INFO - Creating preload: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-def dask_teardown(worker):
-    worker.foo = 'teardown'
-
-2022-08-26 14:07:38,049 - distributed.utils - INFO - Reload module tmp3f3qgovy from .py file
-2022-08-26 14:07:38,050 - distributed.preloading - INFO - Import preload module: /tmp/tmp3f3qgovy.py
-2022-08-26 14:07:38,051 - distributed.preloading - INFO - Run preload setup: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-def dask_teardown(worker):
-    worker.foo = 'teardown'
-
-2022-08-26 14:07:38,052 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:37361'
-2022-08-26 14:07:38,659 - distributed.preloading - INFO - Creating preload: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-def dask_teardown(worker):
-    worker.foo = 'teardown'
-
-2022-08-26 14:07:38,660 - distributed.utils - INFO - Reload module tmptqlqean2 from .py file
-2022-08-26 14:07:38,660 - distributed.preloading - INFO - Import preload module: /tmp/tmptqlqean2.py
-2022-08-26 14:07:38,686 - distributed.preloading - INFO - Run preload setup: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-def dask_teardown(worker):
-    worker.foo = 'teardown'
-
-2022-08-26 14:07:38,686 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41125
-2022-08-26 14:07:38,686 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41125
-2022-08-26 14:07:38,686 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46205
-2022-08-26 14:07:38,686 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42619
-2022-08-26 14:07:38,686 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:38,686 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:38,686 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:38,686 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zna55875
-2022-08-26 14:07:38,686 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:38,948 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41125', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:38,948 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41125
-2022-08-26 14:07:38,948 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:38,948 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42619
-2022-08-26 14:07:38,949 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:38,949 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:38,991 - distributed.scheduler - INFO - Receive client connection: Client-1dd656f1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:38,991 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:38,993 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:07:39,002 - distributed.scheduler - INFO - Remove client Client-1dd656f1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:39,002 - distributed.scheduler - INFO - Remove client Client-1dd656f1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:39,003 - distributed.scheduler - INFO - Close client connection: Client-1dd656f1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:39,003 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:37361'.
-2022-08-26 14:07:39,003 - distributed.preloading - INFO - Run preload teardown: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-def dask_teardown(worker):
-    worker.foo = 'teardown'
-
-2022-08-26 14:07:39,003 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:39,004 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41125
-2022-08-26 14:07:39,004 - distributed.preloading - INFO - Run preload teardown: 
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-def dask_teardown(worker):
-    worker.foo = 'teardown'
-
-2022-08-26 14:07:39,004 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-68c1b5a9-77de-4c74-98de-8704fc639fba Address tcp://127.0.0.1:41125 Status: Status.closing
-2022-08-26 14:07:39,005 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41125', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:39,005 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41125
-2022-08-26 14:07:39,005 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:39,134 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:39,135 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:39,335 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
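
The "Creating preload:" / "Run preload setup:" / "Run preload teardown:" lines in these tests show the worker preload mechanism: a preload can be a module name, a file path, or raw source text defining dask_setup and optionally dask_teardown. A sketch of supplying such a preload through configuration; the config key below is assumed to be the one these tests exercise.

import dask

# Preload source of the same shape as the snippets echoed in the log above.
PRELOAD = """
def dask_setup(worker):
    worker.foo = 'setup'

def dask_teardown(worker):
    worker.foo = 'teardown'
"""

# Assumed key; distributed.worker.preload also accepts module names and file paths.
dask.config.set({"distributed.worker.preload": [PRELOAD]})
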
-distributed/tests/test_preload.py::test_worker_preload_module 2022-08-26 14:07:40,201 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:07:40,204 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:40,207 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:40,207 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39117
-2022-08-26 14:07:40,207 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:07:40,209 - distributed.preloading - INFO - Creating preload: worker_info
-2022-08-26 14:07:40,209 - distributed.preloading - INFO - Import preload module: worker_info
-2022-08-26 14:07:40,215 - distributed.preloading - INFO - Run preload setup: worker_info
-2022-08-26 14:07:40,215 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41493
-2022-08-26 14:07:40,215 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41493
-2022-08-26 14:07:40,215 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33559
-2022-08-26 14:07:40,215 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39117
-2022-08-26 14:07:40,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:40,216 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:40,216 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:40,216 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gdgd5u03
-2022-08-26 14:07:40,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:40,244 - distributed.preloading - INFO - Creating preload: worker_info
-2022-08-26 14:07:40,244 - distributed.preloading - INFO - Import preload module: worker_info
-2022-08-26 14:07:40,250 - distributed.preloading - INFO - Run preload setup: worker_info
-2022-08-26 14:07:40,250 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34553
-2022-08-26 14:07:40,250 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34553
-2022-08-26 14:07:40,250 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34041
-2022-08-26 14:07:40,250 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39117
-2022-08-26 14:07:40,250 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:40,251 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:40,251 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:40,251 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a317ibfh
-2022-08-26 14:07:40,251 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:40,494 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41493', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:40,750 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41493
-2022-08-26 14:07:40,751 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:40,751 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39117
-2022-08-26 14:07:40,751 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:40,751 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34553', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:40,752 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34553
-2022-08-26 14:07:40,752 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:40,752 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:40,752 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39117
-2022-08-26 14:07:40,752 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:40,753 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:40,757 - distributed.scheduler - INFO - Receive client connection: Client-1ee3e722-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:40,758 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:40,761 - distributed.worker - INFO - Run out-of-band function 'check_worker'
-2022-08-26 14:07:40,761 - distributed.worker - INFO - Run out-of-band function 'check_worker'
-2022-08-26 14:07:40,769 - distributed.scheduler - INFO - Remove client Client-1ee3e722-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:40,769 - distributed.scheduler - INFO - Remove client Client-1ee3e722-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:40,769 - distributed.scheduler - INFO - Close client connection: Client-1ee3e722-2583-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_preload.py::test_worker_preload_click 2022-08-26 14:07:40,783 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:40,785 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:40,785 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45351
-2022-08-26 14:07:40,785 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33517
-2022-08-26 14:07:40,786 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-gdgd5u03', purging
-2022-08-26 14:07:40,786 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-a317ibfh', purging
-2022-08-26 14:07:40,786 - distributed.preloading - INFO - Creating preload: 
-import click
-
-@click.command()
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 14:07:40,787 - distributed.utils - INFO - Reload module tmpbreqpr8j from .py file
-2022-08-26 14:07:40,787 - distributed.preloading - INFO - Import preload module: /tmp/tmpbreqpr8j.py
-2022-08-26 14:07:40,789 - distributed.preloading - INFO - Run preload setup: 
-import click
-
-@click.command()
-def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 14:07:40,790 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37627
-2022-08-26 14:07:40,790 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37627
-2022-08-26 14:07:40,790 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39477
-2022-08-26 14:07:40,790 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45351
-2022-08-26 14:07:40,790 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:40,790 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:40,790 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:40,790 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mur9clp1
-2022-08-26 14:07:40,790 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:40,792 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37627', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:40,792 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37627
-2022-08-26 14:07:40,792 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:40,792 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45351
-2022-08-26 14:07:40,793 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:40,793 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37627
-2022-08-26 14:07:40,793 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:40,793 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6638626c-482d-4e30-9e2a-14b0295c78bf Address tcp://127.0.0.1:37627 Status: Status.closing
-2022-08-26 14:07:40,794 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37627', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:40,794 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37627
-2022-08-26 14:07:40,794 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:40,795 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:40,795 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:40,994 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_preload.py::test_worker_preload_click_async 2022-08-26 14:07:41,000 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:41,002 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:41,002 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39641
-2022-08-26 14:07:41,002 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39749
-2022-08-26 14:07:41,003 - distributed.preloading - INFO - Creating preload: 
-import click
-
-@click.command()
-async def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 14:07:41,004 - distributed.utils - INFO - Reload module tmpg0rrrrvf from .py file
-2022-08-26 14:07:41,004 - distributed.preloading - INFO - Import preload module: /tmp/tmpg0rrrrvf.py
-2022-08-26 14:07:41,006 - distributed.preloading - INFO - Run preload setup: 
-import click
-
-@click.command()
-async def dask_setup(worker):
-    worker.foo = 'setup'
-
-2022-08-26 14:07:41,006 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33825
-2022-08-26 14:07:41,006 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33825
-2022-08-26 14:07:41,006 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43859
-2022-08-26 14:07:41,007 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39641
-2022-08-26 14:07:41,007 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:41,007 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:41,007 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:41,007 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ij2a9stx
-2022-08-26 14:07:41,007 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:41,009 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33825', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:41,009 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33825
-2022-08-26 14:07:41,009 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:41,009 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39641
-2022-08-26 14:07:41,009 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:41,009 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33825
-2022-08-26 14:07:41,010 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:41,010 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7ed3c573-be9a-4503-8509-35d2ede1c1d6 Address tcp://127.0.0.1:33825 Status: Status.closing
-2022-08-26 14:07:41,011 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33825', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:41,011 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33825
-2022-08-26 14:07:41,011 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:41,011 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:41,012 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:41,210 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_preload.py::test_preload_import_time 2022-08-26 14:07:41,216 - distributed.preloading - INFO - Creating preload: from distributed.comm.registry import backends
-from distributed.comm.tcp import TCPBackend
-
-backends["foo"] = TCPBackend()
-2022-08-26 14:07:41,216 - distributed.utils - INFO - Reload module tmpmbfi74ya from .py file
-2022-08-26 14:07:41,239 - distributed.preloading - INFO - Import preload module: /tmp/tmpmbfi74ya.py
-2022-08-26 14:07:41,260 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:41,261 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:41,262 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:46763
-2022-08-26 14:07:41,262 - distributed.scheduler - INFO -   dashboard at:                    :46717
-2022-08-26 14:07:41,265 - distributed.nanny - INFO -         Start Nanny at: 'tcp://192.168.1.159:45645'
-2022-08-26 14:07:41,873 - distributed.preloading - INFO - Creating preload: from distributed.comm.registry import backends
-from distributed.comm.tcp import TCPBackend
-
-backends["foo"] = TCPBackend()
-2022-08-26 14:07:41,874 - distributed.utils - INFO - Reload module tmpvfwl_d3y from .py file
-2022-08-26 14:07:41,875 - distributed.preloading - INFO - Import preload module: /tmp/tmpvfwl_d3y.py
-2022-08-26 14:07:41,900 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:34119
-2022-08-26 14:07:41,900 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:34119
-2022-08-26 14:07:41,900 - distributed.worker - INFO -          dashboard at:        192.168.1.159:35355
-2022-08-26 14:07:41,901 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:46763
-2022-08-26 14:07:41,901 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:41,901 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:41,901 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:41,901 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mhj36rm3
-2022-08-26 14:07:41,901 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:42,168 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://192.168.1.159:34119', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:42,168 - distributed.scheduler - INFO - Starting worker compute stream, tcp://192.168.1.159:34119
-2022-08-26 14:07:42,168 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:42,168 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:46763
-2022-08-26 14:07:42,168 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:42,169 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:42,204 - distributed.scheduler - INFO - Receive client connection: Client-1fc0b043-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:42,204 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:42,217 - distributed.scheduler - INFO - Remove client Client-1fc0b043-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:42,217 - distributed.scheduler - INFO - Remove client Client-1fc0b043-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:42,218 - distributed.scheduler - INFO - Close client connection: Client-1fc0b043-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:42,218 - distributed.nanny - INFO - Closing Nanny at 'tcp://192.168.1.159:45645'.
-2022-08-26 14:07:42,218 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:42,218 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:34119
-2022-08-26 14:07:42,219 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-709f7cda-ddb9-4435-966d-230d44074ea1 Address tcp://192.168.1.159:34119 Status: Status.closing
-2022-08-26 14:07:42,220 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://192.168.1.159:34119', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:42,220 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:34119
-2022-08-26 14:07:42,220 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:42,352 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:42,352 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_preload.py::test_web_preload 2022-08-26 14:07:42,358 - distributed.preloading - INFO - Creating preload: http://example.com/preload
-2022-08-26 14:07:42,358 - distributed.preloading - INFO - Downloading preload at http://example.com/preload
-2022-08-26 14:07:42,359 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:42,361 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:42,361 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39917
-2022-08-26 14:07:42,361 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:07:42,361 - distributed.preloading - INFO - Run preload setup: http://example.com/preload
-2022-08-26 14:07:42,361 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:42,361 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_preload.py::test_scheduler_startup 2022-08-26 14:07:42,367 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:42,368 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:42,368 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43107
-2022-08-26 14:07:42,368 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46571
-2022-08-26 14:07:42,369 - distributed.preloading - INFO - Creating preload: 
-import dask
-dask.config.set(scheduler_address="tcp://127.0.0.1:43107")
-
-2022-08-26 14:07:42,371 - distributed.utils - INFO - Reload module tmp_djamdgg from .py file
-2022-08-26 14:07:42,372 - distributed.preloading - INFO - Import preload module: /tmp/tmp_djamdgg.py
-2022-08-26 14:07:42,374 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45555
-2022-08-26 14:07:42,374 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45555
-2022-08-26 14:07:42,374 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33277
-2022-08-26 14:07:42,374 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43107
-2022-08-26 14:07:42,374 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:42,374 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:42,374 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:42,374 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-re_2gen9
-2022-08-26 14:07:42,374 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:42,376 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45555', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:42,376 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45555
-2022-08-26 14:07:42,376 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:42,377 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43107
-2022-08-26 14:07:42,377 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:42,377 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45555
-2022-08-26 14:07:42,377 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:42,378 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a82f7919-8a6b-4786-9795-32e6410d738c Address tcp://127.0.0.1:45555 Status: Status.closing
-2022-08-26 14:07:42,378 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45555', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:42,378 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45555
-2022-08-26 14:07:42,378 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:42,379 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:42,379 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:42,579 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_preload.py::test_scheduler_startup_nanny 2022-08-26 14:07:42,584 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:42,586 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:42,586 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44889
-2022-08-26 14:07:42,586 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38179
-2022-08-26 14:07:42,586 - distributed.preloading - INFO - Creating preload: 
-import dask
-dask.config.set(scheduler_address="tcp://127.0.0.1:44889")
-
-2022-08-26 14:07:42,587 - distributed.utils - INFO - Reload module tmpa56en36f from .py file
-2022-08-26 14:07:42,588 - distributed.preloading - INFO - Import preload module: /tmp/tmpa56en36f.py
-2022-08-26 14:07:42,591 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33397'
-2022-08-26 14:07:43,212 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36493
-2022-08-26 14:07:43,212 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36493
-2022-08-26 14:07:43,212 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39987
-2022-08-26 14:07:43,212 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44889
-2022-08-26 14:07:43,212 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:43,212 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:43,212 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:43,212 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yfizw1ih
-2022-08-26 14:07:43,212 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:43,476 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36493', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:43,477 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36493
-2022-08-26 14:07:43,477 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:43,477 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44889
-2022-08-26 14:07:43,477 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:43,478 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:43,524 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:33397'.
-2022-08-26 14:07:43,524 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:43,524 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36493
-2022-08-26 14:07:43,525 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ad238c50-89c6-4125-8918-51e101c8c8aa Address tcp://127.0.0.1:36493 Status: Status.closing
-2022-08-26 14:07:43,525 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36493', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:43,525 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36493
-2022-08-26 14:07:43,525 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:43,655 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:43,655 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:43,854 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_preload.py::test_web_preload_worker 2022-08-26 14:07:43,860 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:43,862 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:43,862 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35367
-2022-08-26 14:07:43,862 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:07:43,863 - distributed.preloading - INFO - Creating preload: http://example.com/preload
-2022-08-26 14:07:43,863 - distributed.preloading - INFO - Downloading preload at http://example.com/preload
-2022-08-26 14:07:43,865 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36773'
-2022-08-26 14:07:44,481 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39019
-2022-08-26 14:07:44,481 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39019
-2022-08-26 14:07:44,481 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45129
-2022-08-26 14:07:44,481 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35367
-2022-08-26 14:07:44,481 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:44,482 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:44,482 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:44,482 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tn3ia787
-2022-08-26 14:07:44,482 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:44,747 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39019', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:44,748 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39019
-2022-08-26 14:07:44,748 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:44,748 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35367
-2022-08-26 14:07:44,748 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:44,749 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:44,749 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36773'.
-2022-08-26 14:07:44,749 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:07:44,750 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39019
-2022-08-26 14:07:44,751 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8c78bd26-fc1b-4a9a-bb06-9e9bbc07ed72 Address tcp://127.0.0.1:39019 Status: Status.closing
-2022-08-26 14:07:44,751 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39019', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:44,751 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39019
-2022-08-26 14:07:44,751 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:44,879 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:44,879 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_preload.py::test_client_preload_text 2022-08-26 14:07:44,885 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:44,887 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:44,887 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44137
-2022-08-26 14:07:44,887 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42397
-2022-08-26 14:07:44,887 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:44,888 - distributed.scheduler - INFO - Scheduler closing all comms
-XFAIL (T...)
-distributed/tests/test_preload.py::test_client_preload_config 2022-08-26 14:07:44,952 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:44,953 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:44,954 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41875
-2022-08-26 14:07:44,954 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43447
-2022-08-26 14:07:44,954 - distributed.preloading - INFO - Creating preload: def dask_setup(client):
-    client.foo = "setup"
-
-
-def dask_teardown(client):
-    client.foo = "teardown"
-
-2022-08-26 14:07:44,955 - distributed.utils - INFO - Reload module tmp6fjc3ptw from .py file
-2022-08-26 14:07:44,978 - distributed.preloading - INFO - Import preload module: /tmp/tmp6fjc3ptw.py
-2022-08-26 14:07:44,981 - distributed.scheduler - INFO - Receive client connection: Client-2164b3eb-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:44,982 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:44,982 - distributed.preloading - INFO - Run preload setup: def dask_setup(client):
-    client.foo = "setup"
-
-
-def dask_teardown(client):
-    client.foo = "teardown"
-
-2022-08-26 14:07:44,982 - distributed.preloading - INFO - Run preload teardown: def dask_setup(client):
-    client.foo = "setup"
-
-
-def dask_teardown(client):
-    client.foo = "teardown"
-
-2022-08-26 14:07:44,993 - distributed.scheduler - INFO - Remove client Client-2164b3eb-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:44,993 - distributed.scheduler - INFO - Remove client Client-2164b3eb-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:44,993 - distributed.scheduler - INFO - Close client connection: Client-2164b3eb-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:44,994 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:44,994 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:45,195 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_preload.py::test_client_preload_click 2022-08-26 14:07:45,200 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:45,202 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:45,202 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43193
-2022-08-26 14:07:45,202 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33397
-2022-08-26 14:07:45,203 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:45,203 - distributed.scheduler - INFO - Scheduler closing all comms
-XFAIL (...)
-distributed/tests/test_preload.py::test_failure_doesnt_crash 2022-08-26 14:07:45,265 - distributed.preloading - INFO - Creating preload: 
-def dask_setup(worker):
-    raise Exception(123)
-
-def dask_teardown(worker):
-    raise Exception(456)
-
-2022-08-26 14:07:45,266 - distributed.utils - INFO - Reload module tmpuqtyn944 from .py file
-2022-08-26 14:07:45,289 - distributed.preloading - INFO - Import preload module: /tmp/tmpuqtyn944.py
-2022-08-26 14:07:45,310 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:45,311 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:45,312 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:39547
-2022-08-26 14:07:45,312 - distributed.scheduler - INFO -   dashboard at:                    :41109
-2022-08-26 14:07:45,312 - distributed.preloading - INFO - Run preload setup: 
-def dask_setup(worker):
-    raise Exception(123)
-
-def dask_teardown(worker):
-    raise Exception(456)
-
-2022-08-26 14:07:45,312 - distributed.scheduler - ERROR - Failed to start preload
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 3410, in start_unsafe
-    await preload.start()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/preloading.py", line 209, in start
-    future = dask_setup(self.dask_object)
-  File "/tmp/tmpuqtyn944.py", line 3, in dask_setup
-Exception: 123
-2022-08-26 14:07:45,313 - distributed.preloading - INFO - Creating preload: 
-def dask_setup(worker):
-    raise Exception(123)
-
-def dask_teardown(worker):
-    raise Exception(456)
-
-2022-08-26 14:07:45,313 - distributed.utils - INFO - Reload module tmpzxp43su0 from .py file
-2022-08-26 14:07:45,314 - distributed.preloading - INFO - Import preload module: /tmp/tmpzxp43su0.py
-2022-08-26 14:07:45,316 - distributed.preloading - INFO - Run preload setup: 
-def dask_setup(worker):
-    raise Exception(123)
-
-def dask_teardown(worker):
-    raise Exception(456)
-
-2022-08-26 14:07:45,316 - distributed.worker - ERROR - Failed to start preload
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1356, in start_unsafe
-    await preload.start()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/preloading.py", line 209, in start
-    future = dask_setup(self.dask_object)
-  File "/tmp/dask-worker-space/worker-k8skmga1/tmpzxp43su0.py", line 3, in dask_setup
-    raise Exception(123)
-Exception: 123
-2022-08-26 14:07:45,316 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:35871
-2022-08-26 14:07:45,316 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:35871
-2022-08-26 14:07:45,316 - distributed.worker - INFO -          dashboard at:        192.168.1.159:44643
-2022-08-26 14:07:45,316 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:39547
-2022-08-26 14:07:45,316 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:45,316 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:07:45,316 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:45,316 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k8skmga1
-2022-08-26 14:07:45,317 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:45,318 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://192.168.1.159:35871', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:45,319 - distributed.scheduler - INFO - Starting worker compute stream, tcp://192.168.1.159:35871
-2022-08-26 14:07:45,319 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:45,319 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:39547
-2022-08-26 14:07:45,319 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:45,319 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:35871
-2022-08-26 14:07:45,319 - distributed.preloading - INFO - Run preload teardown: 
-def dask_setup(worker):
-    raise Exception(123)
-
-def dask_teardown(worker):
-    raise Exception(456)
-
-2022-08-26 14:07:45,319 - distributed.worker - ERROR - Failed to tear down preload
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1479, in close
-    await preload.teardown()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/preloading.py", line 218, in teardown
-    future = dask_teardown(self.dask_object)
-  File "/tmp/dask-worker-space/worker-k8skmga1/tmpzxp43su0.py", line 6, in dask_teardown
-    raise Exception(456)
-Exception: 456
-2022-08-26 14:07:45,320 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:45,320 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1a0227ac-d166-4a12-a87c-0ac3136274ea Address tcp://192.168.1.159:35871 Status: Status.closing
-2022-08-26 14:07:45,320 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://192.168.1.159:35871', status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:45,320 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:35871
-2022-08-26 14:07:45,320 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:45,321 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:45,321 - distributed.preloading - INFO - Run preload teardown: 
-def dask_setup(worker):
-    raise Exception(123)
-
-def dask_teardown(worker):
-    raise Exception(456)
-
-2022-08-26 14:07:45,321 - distributed.scheduler - ERROR - Failed to tear down preload
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 3462, in close
-    await preload.teardown()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/preloading.py", line 218, in teardown
-    future = dask_teardown(self.dask_object)
-  File "/tmp/tmpuqtyn944.py", line 6, in dask_teardown
-Exception: 456
-2022-08-26 14:07:45,321 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_preload.py::test_client_preload_config_click 2022-08-26 14:07:45,327 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:45,329 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:45,329 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44229
-2022-08-26 14:07:45,329 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43445
-2022-08-26 14:07:45,329 - distributed.preloading - INFO - Creating preload: import click
-
-@click.command()
-@click.argument("value")
-def dask_setup(client, value):
-    client.foo = value
-
-2022-08-26 14:07:45,330 - distributed.utils - INFO - Reload module tmpukunn1a_ from .py file
-2022-08-26 14:07:45,353 - distributed.preloading - INFO - Import preload module: /tmp/tmpukunn1a_.py
-2022-08-26 14:07:45,357 - distributed.scheduler - INFO - Receive client connection: Client-219df01b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,357 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:45,358 - distributed.preloading - INFO - Run preload setup: import click
-
-@click.command()
-@click.argument("value")
-def dask_setup(client, value):
-    client.foo = value
-
-2022-08-26 14:07:45,369 - distributed.scheduler - INFO - Remove client Client-219df01b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,369 - distributed.scheduler - INFO - Remove client Client-219df01b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,369 - distributed.scheduler - INFO - Close client connection: Client-219df01b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,370 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:45,370 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:45,571 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_submit[queue on worker] 2022-08-26 14:07:45,577 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:45,578 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:45,578 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34607
-2022-08-26 14:07:45,579 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42867
-2022-08-26 14:07:45,581 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46749
-2022-08-26 14:07:45,581 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46749
-2022-08-26 14:07:45,581 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:45,581 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43009
-2022-08-26 14:07:45,581 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34607
-2022-08-26 14:07:45,581 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:45,582 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:45,582 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:45,582 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-l_tq27vc
-2022-08-26 14:07:45,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:45,584 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46749', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:45,584 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46749
-2022-08-26 14:07:45,584 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:45,584 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34607
-2022-08-26 14:07:45,584 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:45,584 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:45,598 - distributed.scheduler - INFO - Receive client connection: Client-21c681c1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,598 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:45,666 - distributed.scheduler - INFO - Remove client Client-21c681c1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,666 - distributed.scheduler - INFO - Remove client Client-21c681c1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,667 - distributed.scheduler - INFO - Close client connection: Client-21c681c1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,667 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46749
-2022-08-26 14:07:45,668 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46749', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:45,668 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46749
-2022-08-26 14:07:45,668 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:45,668 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1cc45485-2ff2-47ec-b5de-4327a7205fc7 Address tcp://127.0.0.1:46749 Status: Status.closing
-2022-08-26 14:07:45,669 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:45,669 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:45,870 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_submit[queue on scheduler] 2022-08-26 14:07:45,876 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:45,878 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:45,878 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41445
-2022-08-26 14:07:45,878 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44871
-2022-08-26 14:07:45,881 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40979
-2022-08-26 14:07:45,881 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40979
-2022-08-26 14:07:45,881 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:45,881 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36323
-2022-08-26 14:07:45,881 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41445
-2022-08-26 14:07:45,881 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:45,881 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:45,881 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:45,881 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-crq9eelo
-2022-08-26 14:07:45,881 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:45,883 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40979', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:45,883 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40979
-2022-08-26 14:07:45,883 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:45,884 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41445
-2022-08-26 14:07:45,884 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:45,884 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:45,897 - distributed.scheduler - INFO - Receive client connection: Client-21f43369-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,897 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:45,942 - distributed.scheduler - INFO - Remove client Client-21f43369-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,942 - distributed.scheduler - INFO - Remove client Client-21f43369-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,943 - distributed.scheduler - INFO - Close client connection: Client-21f43369-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:45,943 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40979
-2022-08-26 14:07:45,944 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40979', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:45,944 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40979
-2022-08-26 14:07:45,944 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:45,944 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3d3fc5fc-ed79-4b05-8e2e-9b3ce60eb568 Address tcp://127.0.0.1:40979 Status: Status.closing
-2022-08-26 14:07:45,945 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:45,945 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:46,146 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_map[queue on worker] 2022-08-26 14:07:46,152 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:46,154 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:46,154 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39673
-2022-08-26 14:07:46,154 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45813
-2022-08-26 14:07:46,157 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35043
-2022-08-26 14:07:46,157 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35043
-2022-08-26 14:07:46,157 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:46,157 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42213
-2022-08-26 14:07:46,157 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39673
-2022-08-26 14:07:46,157 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:46,157 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:46,157 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:46,157 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a2bz09dm
-2022-08-26 14:07:46,157 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:46,159 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35043', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:46,160 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35043
-2022-08-26 14:07:46,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:46,160 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39673
-2022-08-26 14:07:46,160 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:46,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:46,173 - distributed.scheduler - INFO - Receive client connection: Client-221e5aff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,174 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:46,244 - distributed.scheduler - INFO - Remove client Client-221e5aff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,244 - distributed.scheduler - INFO - Remove client Client-221e5aff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,244 - distributed.scheduler - INFO - Close client connection: Client-221e5aff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,245 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35043
-2022-08-26 14:07:46,245 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35043', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:46,245 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35043
-2022-08-26 14:07:46,245 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:46,245 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-225f7469-b101-4d3a-986f-80270ea89f04 Address tcp://127.0.0.1:35043 Status: Status.closing
-2022-08-26 14:07:46,246 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:46,246 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:46,448 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_map[queue on scheduler] 2022-08-26 14:07:46,454 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:46,456 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:46,456 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41167
-2022-08-26 14:07:46,456 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37841
-2022-08-26 14:07:46,459 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45439
-2022-08-26 14:07:46,459 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45439
-2022-08-26 14:07:46,459 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:46,459 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41487
-2022-08-26 14:07:46,459 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41167
-2022-08-26 14:07:46,459 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:46,459 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:46,459 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:46,459 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xh6dk0au
-2022-08-26 14:07:46,459 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:46,461 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45439', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:46,461 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45439
-2022-08-26 14:07:46,461 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:46,461 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41167
-2022-08-26 14:07:46,461 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:46,462 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:46,475 - distributed.scheduler - INFO - Receive client connection: Client-224c5cc7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,475 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:46,525 - distributed.scheduler - INFO - Remove client Client-224c5cc7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,525 - distributed.scheduler - INFO - Remove client Client-224c5cc7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,525 - distributed.scheduler - INFO - Close client connection: Client-224c5cc7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,526 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45439
-2022-08-26 14:07:46,527 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45439', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:46,527 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45439
-2022-08-26 14:07:46,527 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:46,527 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c0bc796a-7b0f-4a52-9ad0-a49ab79c5326 Address tcp://127.0.0.1:45439 Status: Status.closing
-2022-08-26 14:07:46,527 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:46,528 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:46,730 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_compute[queue on worker] 2022-08-26 14:07:46,735 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:46,737 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:46,737 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38603
-2022-08-26 14:07:46,737 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33563
-2022-08-26 14:07:46,740 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44625
-2022-08-26 14:07:46,740 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44625
-2022-08-26 14:07:46,740 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:46,740 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44105
-2022-08-26 14:07:46,740 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38603
-2022-08-26 14:07:46,740 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:46,740 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:46,740 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:46,740 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-aplilx7r
-2022-08-26 14:07:46,740 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:46,742 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44625', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:46,743 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44625
-2022-08-26 14:07:46,743 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:46,743 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38603
-2022-08-26 14:07:46,743 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:46,743 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:46,756 - distributed.scheduler - INFO - Receive client connection: Client-22775239-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,757 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:46,824 - distributed.scheduler - INFO - Remove client Client-22775239-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,824 - distributed.scheduler - INFO - Remove client Client-22775239-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,825 - distributed.scheduler - INFO - Close client connection: Client-22775239-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:46,825 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44625
-2022-08-26 14:07:46,826 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44625', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:46,826 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44625
-2022-08-26 14:07:46,826 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:46,826 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4a0b2c03-09bb-4156-85ba-19ece3512801 Address tcp://127.0.0.1:44625 Status: Status.closing
-2022-08-26 14:07:46,827 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:46,827 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:47,029 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_compute[queue on scheduler] 2022-08-26 14:07:47,035 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:47,036 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:47,037 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42909
-2022-08-26 14:07:47,037 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45971
-2022-08-26 14:07:47,039 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45163
-2022-08-26 14:07:47,039 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45163
-2022-08-26 14:07:47,039 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:47,039 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44325
-2022-08-26 14:07:47,040 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42909
-2022-08-26 14:07:47,040 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,040 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:47,040 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:47,040 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-menhuxv9
-2022-08-26 14:07:47,040 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,042 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45163', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:47,042 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45163
-2022-08-26 14:07:47,042 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,042 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42909
-2022-08-26 14:07:47,042 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,042 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,056 - distributed.scheduler - INFO - Receive client connection: Client-22a4fc7d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,056 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,101 - distributed.scheduler - INFO - Remove client Client-22a4fc7d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,102 - distributed.scheduler - INFO - Remove client Client-22a4fc7d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,102 - distributed.scheduler - INFO - Close client connection: Client-22a4fc7d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,102 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45163
-2022-08-26 14:07:47,103 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45163', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:47,103 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45163
-2022-08-26 14:07:47,103 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:47,103 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-547a803f-11d7-4411-b358-8c2148c60030 Address tcp://127.0.0.1:45163 Status: Status.closing
-2022-08-26 14:07:47,104 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:47,104 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:47,307 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_persist[queue on worker] 2022-08-26 14:07:47,313 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:47,315 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:47,315 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40195
-2022-08-26 14:07:47,315 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42887
-2022-08-26 14:07:47,318 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44781
-2022-08-26 14:07:47,318 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44781
-2022-08-26 14:07:47,318 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:47,318 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43731
-2022-08-26 14:07:47,318 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40195
-2022-08-26 14:07:47,318 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,318 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:47,318 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:47,318 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-g7v8m7kv
-2022-08-26 14:07:47,318 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,320 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44781', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:47,321 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44781
-2022-08-26 14:07:47,321 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,321 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40195
-2022-08-26 14:07:47,321 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,321 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,334 - distributed.scheduler - INFO - Receive client connection: Client-22cf81ff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,335 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,403 - distributed.scheduler - INFO - Remove client Client-22cf81ff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,404 - distributed.scheduler - INFO - Remove client Client-22cf81ff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,404 - distributed.scheduler - INFO - Close client connection: Client-22cf81ff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,404 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44781
-2022-08-26 14:07:47,405 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44781', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:47,405 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44781
-2022-08-26 14:07:47,405 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:47,405 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-557e9e4c-c93c-43a8-8333-726cacf93a51 Address tcp://127.0.0.1:44781 Status: Status.closing
-2022-08-26 14:07:47,406 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:47,406 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:47,608 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_persist[queue on scheduler] 2022-08-26 14:07:47,614 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:47,616 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:47,616 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39821
-2022-08-26 14:07:47,616 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44741
-2022-08-26 14:07:47,619 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34649
-2022-08-26 14:07:47,619 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34649
-2022-08-26 14:07:47,619 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:47,619 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40241
-2022-08-26 14:07:47,619 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39821
-2022-08-26 14:07:47,619 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,619 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:47,619 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:47,619 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rsxk7wsx
-2022-08-26 14:07:47,619 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,621 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34649', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:47,621 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34649
-2022-08-26 14:07:47,621 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,622 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39821
-2022-08-26 14:07:47,622 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,622 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,635 - distributed.scheduler - INFO - Receive client connection: Client-22fd6ad4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,636 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,681 - distributed.scheduler - INFO - Remove client Client-22fd6ad4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,681 - distributed.scheduler - INFO - Remove client Client-22fd6ad4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,682 - distributed.scheduler - INFO - Close client connection: Client-22fd6ad4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,682 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34649
-2022-08-26 14:07:47,683 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34649', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:47,683 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34649
-2022-08-26 14:07:47,683 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:47,683 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-67838701-ae53-417c-b564-f7cb23b60fea Address tcp://127.0.0.1:34649 Status: Status.closing
-2022-08-26 14:07:47,684 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:47,684 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:47,885 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_annotate_compute[queue on worker] 2022-08-26 14:07:47,891 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:47,893 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:47,893 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37175
-2022-08-26 14:07:47,893 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34151
-2022-08-26 14:07:47,896 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44451
-2022-08-26 14:07:47,896 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44451
-2022-08-26 14:07:47,896 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:47,896 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40117
-2022-08-26 14:07:47,896 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37175
-2022-08-26 14:07:47,896 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,896 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:47,896 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:47,896 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qby9ac8i
-2022-08-26 14:07:47,896 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,898 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44451', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:47,898 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44451
-2022-08-26 14:07:47,898 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,899 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37175
-2022-08-26 14:07:47,899 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:47,899 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,912 - distributed.scheduler - INFO - Receive client connection: Client-2327b07c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,913 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:47,970 - distributed.scheduler - INFO - Remove client Client-2327b07c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,970 - distributed.scheduler - INFO - Remove client Client-2327b07c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,970 - distributed.scheduler - INFO - Close client connection: Client-2327b07c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:47,971 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44451
-2022-08-26 14:07:47,971 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44451', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:47,971 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44451
-2022-08-26 14:07:47,971 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:47,972 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-96b441de-4fac-4f11-bac1-9cbea012c62a Address tcp://127.0.0.1:44451 Status: Status.closing
-2022-08-26 14:07:47,972 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:47,972 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:48,174 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_annotate_compute[queue on scheduler] 2022-08-26 14:07:48,180 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:48,181 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:48,182 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37723
-2022-08-26 14:07:48,182 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42277
-2022-08-26 14:07:48,184 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38675
-2022-08-26 14:07:48,184 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38675
-2022-08-26 14:07:48,184 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:48,184 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36355
-2022-08-26 14:07:48,185 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37723
-2022-08-26 14:07:48,185 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:48,185 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:48,185 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:48,185 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lxz5po98
-2022-08-26 14:07:48,185 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:48,187 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38675', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:48,187 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38675
-2022-08-26 14:07:48,187 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:48,187 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37723
-2022-08-26 14:07:48,187 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:48,187 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:48,201 - distributed.scheduler - INFO - Receive client connection: Client-2353b318-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,201 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:48,244 - distributed.scheduler - INFO - Remove client Client-2353b318-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,244 - distributed.scheduler - INFO - Remove client Client-2353b318-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,244 - distributed.scheduler - INFO - Close client connection: Client-2353b318-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,245 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38675
-2022-08-26 14:07:48,246 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38675', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:48,246 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38675
-2022-08-26 14:07:48,246 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:48,246 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-baa300d0-8eaf-48a5-8fde-00a9b23c09bf Address tcp://127.0.0.1:38675 Status: Status.closing
-2022-08-26 14:07:48,246 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:48,247 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:48,448 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_annotate_persist[queue on worker] 2022-08-26 14:07:48,454 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:48,456 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:48,456 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33419
-2022-08-26 14:07:48,456 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33621
-2022-08-26 14:07:48,459 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46211
-2022-08-26 14:07:48,459 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46211
-2022-08-26 14:07:48,459 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:48,459 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40635
-2022-08-26 14:07:48,459 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33419
-2022-08-26 14:07:48,459 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:48,459 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:48,459 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:48,459 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-u1jrbra9
-2022-08-26 14:07:48,459 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:48,461 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46211', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:48,461 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46211
-2022-08-26 14:07:48,461 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:48,462 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33419
-2022-08-26 14:07:48,462 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:48,462 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:48,475 - distributed.scheduler - INFO - Receive client connection: Client-237d92e5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,475 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:48,532 - distributed.scheduler - INFO - Remove client Client-237d92e5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,533 - distributed.scheduler - INFO - Remove client Client-237d92e5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,533 - distributed.scheduler - INFO - Close client connection: Client-237d92e5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,533 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46211
-2022-08-26 14:07:48,534 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46211', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:48,534 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46211
-2022-08-26 14:07:48,534 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:48,534 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-64e43166-aa80-40b9-b919-54bc2e953b74 Address tcp://127.0.0.1:46211 Status: Status.closing
-2022-08-26 14:07:48,535 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:48,535 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:48,737 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_annotate_persist[queue on scheduler] 2022-08-26 14:07:48,743 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:48,744 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:48,745 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43745
-2022-08-26 14:07:48,745 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46583
-2022-08-26 14:07:48,747 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38617
-2022-08-26 14:07:48,747 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38617
-2022-08-26 14:07:48,747 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:48,747 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36771
-2022-08-26 14:07:48,748 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43745
-2022-08-26 14:07:48,748 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:48,748 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:48,748 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:48,748 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_d0meqwd
-2022-08-26 14:07:48,748 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:48,749 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38617', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:48,750 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38617
-2022-08-26 14:07:48,750 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:48,750 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43745
-2022-08-26 14:07:48,750 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:48,750 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:48,764 - distributed.scheduler - INFO - Receive client connection: Client-23a99912-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,764 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:48,808 - distributed.scheduler - INFO - Remove client Client-23a99912-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,808 - distributed.scheduler - INFO - Remove client Client-23a99912-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,808 - distributed.scheduler - INFO - Close client connection: Client-23a99912-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:48,809 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38617
-2022-08-26 14:07:48,809 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38617', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:48,809 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38617
-2022-08-26 14:07:48,810 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:48,810 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-33f69c76-e845-4b6f-968f-220af39ed814 Address tcp://127.0.0.1:38617 Status: Status.closing
-2022-08-26 14:07:48,810 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:48,810 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:49,012 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_repeated_persists_same_priority[queue on worker] 2022-08-26 14:07:49,018 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:49,019 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:49,020 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44315
-2022-08-26 14:07:49,020 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35977
-2022-08-26 14:07:49,022 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36661
-2022-08-26 14:07:49,022 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36661
-2022-08-26 14:07:49,022 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:49,022 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39451
-2022-08-26 14:07:49,022 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44315
-2022-08-26 14:07:49,023 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:49,023 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:49,023 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:49,023 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_bjdk8pw
-2022-08-26 14:07:49,023 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:49,024 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36661', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:49,025 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36661
-2022-08-26 14:07:49,025 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:49,025 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44315
-2022-08-26 14:07:49,025 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:49,025 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:49,039 - distributed.scheduler - INFO - Receive client connection: Client-23d38e82-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:49,039 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:49,458 - distributed.scheduler - INFO - Remove client Client-23d38e82-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:49,458 - distributed.scheduler - INFO - Remove client Client-23d38e82-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:49,458 - distributed.scheduler - INFO - Close client connection: Client-23d38e82-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:49,459 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36661
-2022-08-26 14:07:49,460 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-58023b87-6d72-489d-9b87-5a90908e3e61 Address tcp://127.0.0.1:36661 Status: Status.closing
-2022-08-26 14:07:49,460 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36661', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:49,460 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36661
-2022-08-26 14:07:49,460 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:49,491 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:49,491 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:49,693 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_repeated_persists_same_priority[queue on scheduler] 2022-08-26 14:07:49,699 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:49,701 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:49,701 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38707
-2022-08-26 14:07:49,701 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39805
-2022-08-26 14:07:49,704 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42681
-2022-08-26 14:07:49,704 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42681
-2022-08-26 14:07:49,704 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:49,704 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40983
-2022-08-26 14:07:49,704 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38707
-2022-08-26 14:07:49,704 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:49,704 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:49,704 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:49,704 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m9n99u0g
-2022-08-26 14:07:49,704 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:49,706 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42681', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:49,706 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42681
-2022-08-26 14:07:49,706 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:49,707 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38707
-2022-08-26 14:07:49,707 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:49,707 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:49,720 - distributed.scheduler - INFO - Receive client connection: Client-243b8dc1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:49,720 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:50,131 - distributed.scheduler - INFO - Remove client Client-243b8dc1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:50,131 - distributed.scheduler - INFO - Remove client Client-243b8dc1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:50,132 - distributed.scheduler - INFO - Close client connection: Client-243b8dc1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:50,132 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42681
-2022-08-26 14:07:50,133 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-68b9fc7d-fb0b-41c0-b21a-92abab592c8f Address tcp://127.0.0.1:42681 Status: Status.closing
-2022-08-26 14:07:50,134 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42681', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:50,134 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42681
-2022-08-26 14:07:50,134 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:50,161 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:50,161 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:50,364 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_last_in_first_out[queue on worker] 2022-08-26 14:07:50,370 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:50,372 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:50,372 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38553
-2022-08-26 14:07:50,372 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40903
-2022-08-26 14:07:50,375 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37605
-2022-08-26 14:07:50,375 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37605
-2022-08-26 14:07:50,375 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:50,375 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35615
-2022-08-26 14:07:50,375 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38553
-2022-08-26 14:07:50,375 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:50,375 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:50,375 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:50,375 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5_d8vb66
-2022-08-26 14:07:50,375 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:50,377 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37605', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:50,377 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37605
-2022-08-26 14:07:50,377 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:50,377 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38553
-2022-08-26 14:07:50,377 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:50,378 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:50,391 - distributed.scheduler - INFO - Receive client connection: Client-24a1e8c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:50,391 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:50,710 - distributed.scheduler - INFO - Remove client Client-24a1e8c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:50,711 - distributed.scheduler - INFO - Remove client Client-24a1e8c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:50,711 - distributed.scheduler - INFO - Close client connection: Client-24a1e8c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:50,711 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37605
-2022-08-26 14:07:50,712 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5280f5b0-5111-4ee2-a651-8d93b7ed5cee Address tcp://127.0.0.1:37605 Status: Status.closing
-2022-08-26 14:07:50,713 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37605', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:50,713 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37605
-2022-08-26 14:07:50,713 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:50,746 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:50,747 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:50,949 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_priorities.py::test_last_in_first_out[queue on scheduler] 2022-08-26 14:07:50,955 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:50,956 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:50,957 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36859
-2022-08-26 14:07:50,957 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34127
-2022-08-26 14:07:50,959 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41835
-2022-08-26 14:07:50,959 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41835
-2022-08-26 14:07:50,959 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:50,959 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37241
-2022-08-26 14:07:50,959 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36859
-2022-08-26 14:07:50,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:50,960 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:50,960 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:50,960 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t31719ou
-2022-08-26 14:07:50,960 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:50,962 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41835', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:50,962 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41835
-2022-08-26 14:07:50,962 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:50,962 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36859
-2022-08-26 14:07:50,962 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:50,962 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:50,976 - distributed.scheduler - INFO - Receive client connection: Client-24fb2065-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:50,976 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:51,287 - distributed.scheduler - INFO - Remove client Client-24fb2065-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:51,288 - distributed.scheduler - INFO - Remove client Client-24fb2065-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:51,288 - distributed.scheduler - INFO - Close client connection: Client-24fb2065-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:51,288 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41835
-2022-08-26 14:07:51,289 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5018164d-4cb7-4b28-b02c-1547e750fef8 Address tcp://127.0.0.1:41835 Status: Status.closing
-2022-08-26 14:07:51,289 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41835', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:51,290 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41835
-2022-08-26 14:07:51,290 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:51,317 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:51,317 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:51,520 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_profile.py::test_basic PASSED
-distributed/tests/test_profile.py::test_basic_low_level SKIPPED (cou...)
-distributed/tests/test_profile.py::test_merge PASSED
-distributed/tests/test_profile.py::test_merge_empty PASSED
-distributed/tests/test_profile.py::test_call_stack PASSED
-distributed/tests/test_profile.py::test_identifier PASSED
-distributed/tests/test_profile.py::test_watch PASSED
-distributed/tests/test_profile.py::test_watch_requires_lock_to_run PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[-1-1] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[0-2] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[1-2] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[11-2] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[12-3] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[21-4] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[22-4] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[23-4] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[24-2] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[25-2] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[26-2] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[27-2] PASSED
-distributed/tests/test_profile.py::test_info_frame_f_lineno[100-2] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[-1-1] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[0-2] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[1-2] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[11-2] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[12-3] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[21-4] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[22-4] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[23-4] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[24-2] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[25-2] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[26-2] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[27-2] PASSED
-distributed/tests/test_profile.py::test_call_stack_f_lineno[100-2] PASSED
-distributed/tests/test_profile.py::test_stack_overflow FAILED
-distributed/tests/test_publish.py::test_publish_simple 2022-08-26 14:07:55,026 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:55,028 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:55,028 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44725
-2022-08-26 14:07:55,028 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40537
-2022-08-26 14:07:55,033 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46527
-2022-08-26 14:07:55,033 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46527
-2022-08-26 14:07:55,033 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:55,033 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36955
-2022-08-26 14:07:55,033 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44725
-2022-08-26 14:07:55,033 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,033 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:55,033 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:55,033 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k20zbkfe
-2022-08-26 14:07:55,033 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,034 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43917
-2022-08-26 14:07:55,034 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43917
-2022-08-26 14:07:55,034 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:07:55,034 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46419
-2022-08-26 14:07:55,034 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44725
-2022-08-26 14:07:55,034 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,034 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:07:55,034 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:55,034 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-__eh5g5a
-2022-08-26 14:07:55,034 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,037 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46527', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:55,037 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46527
-2022-08-26 14:07:55,037 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,038 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43917', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:55,038 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43917
-2022-08-26 14:07:55,038 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,038 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44725
-2022-08-26 14:07:55,038 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,038 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44725
-2022-08-26 14:07:55,038 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,039 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,039 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,056 - distributed.scheduler - INFO - Receive client connection: Client-27692c47-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,056 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,056 - distributed.scheduler - INFO - Receive client connection: Client-2769329c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,056 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,064 - distributed.core - ERROR - 'Dataset data already exists'
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 805, in wrapper
-    return func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/publish.py", line 35, in put
-    raise KeyError("Dataset %s already exists" % name)
-KeyError: 'Dataset data already exists'
-2022-08-26 14:07:55,064 - distributed.core - ERROR - Exception while handling op publish_put
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 768, in _handle_comm
-    result = handler(**msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 805, in wrapper
-    return func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/publish.py", line 35, in put
-    raise KeyError("Dataset %s already exists" % name)
-KeyError: 'Dataset data already exists'
-2022-08-26 14:07:55,068 - distributed.scheduler - INFO - Remove client Client-27692c47-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,068 - distributed.scheduler - INFO - Remove client Client-2769329c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,068 - distributed.scheduler - INFO - Remove client Client-27692c47-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,069 - distributed.scheduler - INFO - Remove client Client-2769329c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,069 - distributed.scheduler - INFO - Close client connection: Client-27692c47-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,069 - distributed.scheduler - INFO - Close client connection: Client-2769329c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,070 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46527
-2022-08-26 14:07:55,070 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43917
-2022-08-26 14:07:55,071 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46527', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:07:55,071 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46527
-2022-08-26 14:07:55,071 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43917', name: 1, status: closing, memory: 2, processing: 0>
-2022-08-26 14:07:55,071 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43917
-2022-08-26 14:07:55,071 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:55,072 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-59924fbf-5488-431e-bb1e-8472995a8f13 Address tcp://127.0.0.1:46527 Status: Status.closing
-2022-08-26 14:07:55,072 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6da3dbda-fead-46c5-8d3c-1ca46ceffd64 Address tcp://127.0.0.1:43917 Status: Status.closing
-2022-08-26 14:07:55,073 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:55,073 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:55,275 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_publish.py::test_publish_non_string_key 2022-08-26 14:07:55,281 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:55,282 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:55,282 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43431
-2022-08-26 14:07:55,282 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46399
-2022-08-26 14:07:55,287 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45657
-2022-08-26 14:07:55,287 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45657
-2022-08-26 14:07:55,287 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:55,287 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35651
-2022-08-26 14:07:55,287 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43431
-2022-08-26 14:07:55,287 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,287 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:55,287 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:55,287 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-719ha9rk
-2022-08-26 14:07:55,287 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,288 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45011
-2022-08-26 14:07:55,288 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45011
-2022-08-26 14:07:55,288 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:07:55,288 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43063
-2022-08-26 14:07:55,288 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43431
-2022-08-26 14:07:55,288 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,288 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:07:55,288 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:55,288 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4x8xwgrw
-2022-08-26 14:07:55,288 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,291 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45657', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:55,291 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45657
-2022-08-26 14:07:55,291 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,291 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45011', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:55,292 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45011
-2022-08-26 14:07:55,292 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,292 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43431
-2022-08-26 14:07:55,292 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,292 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43431
-2022-08-26 14:07:55,292 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,293 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,293 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,306 - distributed.scheduler - INFO - Receive client connection: Client-278fea8a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,306 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,319 - distributed.scheduler - INFO - Remove client Client-278fea8a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,319 - distributed.scheduler - INFO - Remove client Client-278fea8a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,319 - distributed.scheduler - INFO - Close client connection: Client-278fea8a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,320 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45657
-2022-08-26 14:07:55,320 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45011
-2022-08-26 14:07:55,321 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45657', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:07:55,321 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45657
-2022-08-26 14:07:55,322 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45011', name: 1, status: closing, memory: 2, processing: 0>
-2022-08-26 14:07:55,322 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45011
-2022-08-26 14:07:55,322 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:55,322 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5a4247cf-0d9b-4c60-98bf-4e5b836107db Address tcp://127.0.0.1:45657 Status: Status.closing
-2022-08-26 14:07:55,322 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ff15d94e-8034-4279-95aa-85214922a4ec Address tcp://127.0.0.1:45011 Status: Status.closing
-2022-08-26 14:07:55,323 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:55,323 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:55,525 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_publish.py::test_publish_roundtrip 2022-08-26 14:07:55,531 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:55,533 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:55,533 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34571
-2022-08-26 14:07:55,533 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36579
-2022-08-26 14:07:55,537 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43041
-2022-08-26 14:07:55,537 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43041
-2022-08-26 14:07:55,537 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:55,537 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32777
-2022-08-26 14:07:55,537 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34571
-2022-08-26 14:07:55,537 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,537 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:55,537 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:55,538 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mmh9r9fq
-2022-08-26 14:07:55,538 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,538 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36861
-2022-08-26 14:07:55,538 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36861
-2022-08-26 14:07:55,538 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:07:55,538 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38623
-2022-08-26 14:07:55,538 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34571
-2022-08-26 14:07:55,538 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,538 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:07:55,538 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:55,538 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4_t3jp_v
-2022-08-26 14:07:55,539 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,541 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43041', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:55,542 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43041
-2022-08-26 14:07:55,542 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,542 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36861', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:55,542 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36861
-2022-08-26 14:07:55,542 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,543 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34571
-2022-08-26 14:07:55,543 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,543 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34571
-2022-08-26 14:07:55,543 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,543 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,543 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,557 - distributed.scheduler - INFO - Receive client connection: Client-27b62897-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,557 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,560 - distributed.scheduler - INFO - Receive client connection: Client-27b6add7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,561 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,577 - distributed.scheduler - INFO - Remove client Client-27b62897-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,577 - distributed.scheduler - INFO - Remove client Client-27b62897-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,578 - distributed.scheduler - INFO - Close client connection: Client-27b62897-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,583 - distributed.scheduler - INFO - Remove client Client-27b6add7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,583 - distributed.scheduler - INFO - Remove client Client-27b6add7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,583 - distributed.scheduler - INFO - Close client connection: Client-27b6add7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,584 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43041
-2022-08-26 14:07:55,584 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36861
-2022-08-26 14:07:55,585 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43041', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:07:55,585 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43041
-2022-08-26 14:07:55,585 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36861', name: 1, status: closing, memory: 2, processing: 0>
-2022-08-26 14:07:55,585 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36861
-2022-08-26 14:07:55,585 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:55,585 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fbf0a962-0509-43aa-a459-68444752c70f Address tcp://127.0.0.1:43041 Status: Status.closing
-2022-08-26 14:07:55,586 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5b9d4a79-4723-40f2-beb1-b7ebe4aea76d Address tcp://127.0.0.1:36861 Status: Status.closing
-2022-08-26 14:07:55,586 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:55,587 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:55,788 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_publish.py::test_unpublish 2022-08-26 14:07:55,794 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:55,796 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:55,796 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:32971
-2022-08-26 14:07:55,796 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38767
-2022-08-26 14:07:55,801 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33799
-2022-08-26 14:07:55,801 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33799
-2022-08-26 14:07:55,801 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:55,801 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44527
-2022-08-26 14:07:55,801 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32971
-2022-08-26 14:07:55,801 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,801 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:55,801 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:55,801 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-o41oyia8
-2022-08-26 14:07:55,801 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,801 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41487
-2022-08-26 14:07:55,802 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41487
-2022-08-26 14:07:55,802 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:07:55,802 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45679
-2022-08-26 14:07:55,802 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32971
-2022-08-26 14:07:55,802 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,802 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:07:55,802 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:55,802 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pdcallah
-2022-08-26 14:07:55,802 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,805 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33799', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:55,805 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33799
-2022-08-26 14:07:55,805 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,805 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41487', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:55,806 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41487
-2022-08-26 14:07:55,806 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,806 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32971
-2022-08-26 14:07:55,806 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,806 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32971
-2022-08-26 14:07:55,806 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:55,807 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,807 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,820 - distributed.scheduler - INFO - Receive client connection: Client-27de57b9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,820 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:55,842 - distributed.scheduler - INFO - Remove client Client-27de57b9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,843 - distributed.scheduler - INFO - Remove client Client-27de57b9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,843 - distributed.scheduler - INFO - Close client connection: Client-27de57b9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:55,843 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33799
-2022-08-26 14:07:55,844 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41487
-2022-08-26 14:07:55,844 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33799', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:55,845 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33799
-2022-08-26 14:07:55,845 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41487', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:55,845 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41487
-2022-08-26 14:07:55,845 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:55,845 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d99cabb2-1395-4502-96d2-f32858a01f5b Address tcp://127.0.0.1:33799 Status: Status.closing
-2022-08-26 14:07:55,845 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-af33b044-712b-44a6-bb22-53c576c469c5 Address tcp://127.0.0.1:41487 Status: Status.closing
-2022-08-26 14:07:55,846 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:55,846 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:56,048 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_publish.py::test_unpublish_sync 2022-08-26 14:07:56,912 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:07:56,914 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:56,917 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:56,918 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41535
-2022-08-26 14:07:56,918 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:07:56,932 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40119
-2022-08-26 14:07:56,932 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40119
-2022-08-26 14:07:56,932 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44745
-2022-08-26 14:07:56,932 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41535
-2022-08-26 14:07:56,932 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:56,932 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:56,932 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:56,932 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gljaohth
-2022-08-26 14:07:56,932 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:56,964 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41785
-2022-08-26 14:07:56,964 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41785
-2022-08-26 14:07:56,964 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39251
-2022-08-26 14:07:56,964 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41535
-2022-08-26 14:07:56,964 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:56,964 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:56,964 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:56,964 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k7qfxna6
-2022-08-26 14:07:56,964 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:57,218 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40119', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:57,485 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40119
-2022-08-26 14:07:57,486 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:57,486 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41535
-2022-08-26 14:07:57,486 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:57,486 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41785', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:57,487 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:57,487 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41785
-2022-08-26 14:07:57,487 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:57,487 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41535
-2022-08-26 14:07:57,488 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:57,488 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:57,493 - distributed.scheduler - INFO - Receive client connection: Client-28dd98bb-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:57,494 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:07:57,505 - distributed.scheduler - INFO - Remove client Client-28dd98bb-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:57,505 - distributed.scheduler - INFO - Remove client Client-28dd98bb-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_publish.py::test_publish_multiple_datasets 2022-08-26 14:07:57,518 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:57,520 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:57,520 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33473
-2022-08-26 14:07:57,520 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37525
-2022-08-26 14:07:57,520 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-gljaohth', purging
-2022-08-26 14:07:57,521 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-k7qfxna6', purging
-2022-08-26 14:07:57,525 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37537
-2022-08-26 14:07:57,525 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37537
-2022-08-26 14:07:57,525 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:57,525 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40913
-2022-08-26 14:07:57,525 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33473
-2022-08-26 14:07:57,525 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:57,525 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:57,525 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:57,525 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nk3ovjmg
-2022-08-26 14:07:57,525 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:57,526 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34113
-2022-08-26 14:07:57,526 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34113
-2022-08-26 14:07:57,526 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:07:57,526 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34581
-2022-08-26 14:07:57,526 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33473
-2022-08-26 14:07:57,526 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:57,526 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:07:57,526 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:57,526 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tmdrmx4u
-2022-08-26 14:07:57,526 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:57,529 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37537', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:57,529 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37537
-2022-08-26 14:07:57,529 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:57,530 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34113', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:57,530 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34113
-2022-08-26 14:07:57,530 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:57,530 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33473
-2022-08-26 14:07:57,530 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:57,530 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33473
-2022-08-26 14:07:57,530 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:57,531 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:57,531 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:57,544 - distributed.scheduler - INFO - Receive client connection: Client-28e5685b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:57,545 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:57,556 - distributed.scheduler - INFO - Remove client Client-28e5685b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:57,556 - distributed.scheduler - INFO - Remove client Client-28e5685b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:57,557 - distributed.scheduler - INFO - Close client connection: Client-28e5685b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:57,557 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37537
-2022-08-26 14:07:57,557 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34113
-2022-08-26 14:07:57,558 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37537', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:57,558 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37537
-2022-08-26 14:07:57,558 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34113', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:07:57,558 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34113
-2022-08-26 14:07:57,559 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:57,559 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-89a5fedd-84ee-433c-8f0c-a144d09cf37a Address tcp://127.0.0.1:37537 Status: Status.closing
-2022-08-26 14:07:57,559 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-da5cb381-e463-4237-a6fa-f7f340a82db3 Address tcp://127.0.0.1:34113 Status: Status.closing
-2022-08-26 14:07:57,560 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:57,560 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:57,763 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_publish.py::test_unpublish_multiple_datasets_sync 2022-08-26 14:07:58,617 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:07:58,620 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:58,623 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:58,623 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42345
-2022-08-26 14:07:58,623 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:07:58,632 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35501
-2022-08-26 14:07:58,632 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35501
-2022-08-26 14:07:58,632 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41857
-2022-08-26 14:07:58,632 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42345
-2022-08-26 14:07:58,632 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:58,632 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:58,632 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:58,632 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gtoj7wm_
-2022-08-26 14:07:58,632 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:58,672 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43869
-2022-08-26 14:07:58,672 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43869
-2022-08-26 14:07:58,672 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40177
-2022-08-26 14:07:58,672 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42345
-2022-08-26 14:07:58,673 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:58,673 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:58,673 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:58,673 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hn6ppnzr
-2022-08-26 14:07:58,673 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:58,917 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35501', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:59,176 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35501
-2022-08-26 14:07:59,177 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:59,177 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42345
-2022-08-26 14:07:59,177 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:59,177 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43869', status: init, memory: 0, processing: 0>
-2022-08-26 14:07:59,178 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:59,178 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43869
-2022-08-26 14:07:59,178 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:59,178 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42345
-2022-08-26 14:07:59,178 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:59,179 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:59,184 - distributed.scheduler - INFO - Receive client connection: Client-29df8bad-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:59,184 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:07:59,195 - distributed.scheduler - INFO - Remove client Client-29df8bad-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:59,195 - distributed.scheduler - INFO - Remove client Client-29df8bad-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:59,196 - distributed.scheduler - INFO - Close client connection: Client-29df8bad-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_publish.py::test_publish_bag 2022-08-26 14:07:59,208 - distributed.scheduler - INFO - State start
-2022-08-26 14:07:59,210 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:07:59,210 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42445
-2022-08-26 14:07:59,210 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39863
-2022-08-26 14:07:59,210 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-gtoj7wm_', purging
-2022-08-26 14:07:59,211 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-hn6ppnzr', purging
-2022-08-26 14:07:59,215 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43171
-2022-08-26 14:07:59,215 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43171
-2022-08-26 14:07:59,215 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:07:59,215 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35357
-2022-08-26 14:07:59,215 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42445
-2022-08-26 14:07:59,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:59,215 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:07:59,215 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:59,215 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nq9e2jdh
-2022-08-26 14:07:59,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:59,216 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40497
-2022-08-26 14:07:59,216 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40497
-2022-08-26 14:07:59,216 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:07:59,216 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33537
-2022-08-26 14:07:59,216 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42445
-2022-08-26 14:07:59,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:59,216 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:07:59,216 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:07:59,216 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gm_rl8_c
-2022-08-26 14:07:59,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:59,219 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43171', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:59,219 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43171
-2022-08-26 14:07:59,219 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:59,220 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40497', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:07:59,220 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40497
-2022-08-26 14:07:59,220 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:59,220 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42445
-2022-08-26 14:07:59,220 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:59,220 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42445
-2022-08-26 14:07:59,220 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:07:59,221 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:59,221 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:59,234 - distributed.scheduler - INFO - Receive client connection: Client-29e75095-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:59,235 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:59,238 - distributed.scheduler - INFO - Receive client connection: Client-29e7d69e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:59,238 - distributed.core - INFO - Starting established connection
-2022-08-26 14:07:59,261 - distributed.scheduler - INFO - Remove client Client-29e75095-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:59,261 - distributed.scheduler - INFO - Remove client Client-29e75095-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:59,261 - distributed.scheduler - INFO - Close client connection: Client-29e75095-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:59,272 - distributed.scheduler - INFO - Remove client Client-29e7d69e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:59,272 - distributed.scheduler - INFO - Remove client Client-29e7d69e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:59,272 - distributed.scheduler - INFO - Close client connection: Client-29e7d69e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:07:59,273 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43171
-2022-08-26 14:07:59,273 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40497
-2022-08-26 14:07:59,274 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43171', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:07:59,274 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43171
-2022-08-26 14:07:59,274 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40497', name: 1, status: closing, memory: 3, processing: 0>
-2022-08-26 14:07:59,275 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40497
-2022-08-26 14:07:59,275 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:07:59,275 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f6272872-4e02-4b2a-bdd4-1e42e307045c Address tcp://127.0.0.1:43171 Status: Status.closing
-2022-08-26 14:07:59,275 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f47f14a7-6609-4227-ae04-c5c007c02b16 Address tcp://127.0.0.1:40497 Status: Status.closing
-2022-08-26 14:07:59,276 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:07:59,277 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:07:59,479 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_publish.py::test_datasets_setitem 2022-08-26 14:08:00,344 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:00,346 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:00,350 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:00,350 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38647
-2022-08-26 14:08:00,350 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:00,360 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34825
-2022-08-26 14:08:00,360 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34825
-2022-08-26 14:08:00,360 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45327
-2022-08-26 14:08:00,360 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38647
-2022-08-26 14:08:00,360 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:00,360 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:00,360 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:00,361 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p8_i2kku
-2022-08-26 14:08:00,361 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:00,386 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43775
-2022-08-26 14:08:00,386 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43775
-2022-08-26 14:08:00,386 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45751
-2022-08-26 14:08:00,386 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38647
-2022-08-26 14:08:00,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:00,386 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:00,386 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:00,386 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3okg3wol
-2022-08-26 14:08:00,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:00,647 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34825', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:00,905 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34825
-2022-08-26 14:08:00,905 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:00,905 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38647
-2022-08-26 14:08:00,905 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:00,905 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43775', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:00,906 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43775
-2022-08-26 14:08:00,906 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:00,906 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:00,906 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38647
-2022-08-26 14:08:00,906 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:00,907 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:00,912 - distributed.scheduler - INFO - Receive client connection: Client-2ae73b3b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:00,912 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:08:00,923 - distributed.scheduler - INFO - Remove client Client-2ae73b3b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:00,923 - distributed.scheduler - INFO - Remove client Client-2ae73b3b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:00,923 - distributed.scheduler - INFO - Close client connection: Client-2ae73b3b-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_publish.py::test_datasets_getitem 2022-08-26 14:08:01,775 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:01,777 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:01,780 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:01,780 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42929
-2022-08-26 14:08:01,780 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:01,803 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-3okg3wol', purging
-2022-08-26 14:08:01,803 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-p8_i2kku', purging
-2022-08-26 14:08:01,809 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35055
-2022-08-26 14:08:01,809 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35055
-2022-08-26 14:08:01,810 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46187
-2022-08-26 14:08:01,810 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42929
-2022-08-26 14:08:01,810 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:01,810 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:01,810 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:01,810 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9xc2nypx
-2022-08-26 14:08:01,810 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:01,841 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42243
-2022-08-26 14:08:01,841 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42243
-2022-08-26 14:08:01,841 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39583
-2022-08-26 14:08:01,841 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42929
-2022-08-26 14:08:01,841 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:01,841 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:01,841 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:01,841 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xa8_mgwh
-2022-08-26 14:08:01,841 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:02,095 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35055', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:02,351 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35055
-2022-08-26 14:08:02,352 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:02,352 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42929
-2022-08-26 14:08:02,352 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:02,352 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42243', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:02,353 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42243
-2022-08-26 14:08:02,353 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:02,353 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:02,353 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42929
-2022-08-26 14:08:02,353 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:02,354 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:02,359 - distributed.scheduler - INFO - Receive client connection: Client-2bc404be-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:02,359 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:08:02,370 - distributed.scheduler - INFO - Remove client Client-2bc404be-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:02,370 - distributed.scheduler - INFO - Remove client Client-2bc404be-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:02,370 - distributed.scheduler - INFO - Close client connection: Client-2bc404be-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_publish.py::test_datasets_getitem_default 2022-08-26 14:08:03,237 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:03,240 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:03,243 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:03,243 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40787
-2022-08-26 14:08:03,243 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:03,245 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-xa8_mgwh', purging
-2022-08-26 14:08:03,245 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-9xc2nypx', purging
-2022-08-26 14:08:03,251 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34285
-2022-08-26 14:08:03,251 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34285
-2022-08-26 14:08:03,251 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34783
-2022-08-26 14:08:03,251 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40787
-2022-08-26 14:08:03,251 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:03,251 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:03,251 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:03,251 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nyvqkfp2
-2022-08-26 14:08:03,251 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:03,294 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35845
-2022-08-26 14:08:03,294 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35845
-2022-08-26 14:08:03,294 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35137
-2022-08-26 14:08:03,294 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40787
-2022-08-26 14:08:03,294 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:03,295 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:03,295 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:03,295 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xieurugs
-2022-08-26 14:08:03,295 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:03,537 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34285', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:03,796 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34285
-2022-08-26 14:08:03,797 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:03,797 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40787
-2022-08-26 14:08:03,797 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:03,797 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35845', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:03,798 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35845
-2022-08-26 14:08:03,798 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:03,798 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:03,798 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40787
-2022-08-26 14:08:03,798 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:03,799 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:03,803 - distributed.scheduler - INFO - Receive client connection: Client-2ca08220-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:03,804 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:08:03,814 - distributed.scheduler - INFO - Remove client Client-2ca08220-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:03,815 - distributed.scheduler - INFO - Remove client Client-2ca08220-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:03,815 - distributed.scheduler - INFO - Close client connection: Client-2ca08220-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_publish.py::test_datasets_delitem 2022-08-26 14:08:04,680 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:04,683 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:04,686 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:04,686 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42225
-2022-08-26 14:08:04,686 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:04,702 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-nyvqkfp2', purging
-2022-08-26 14:08:04,702 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-xieurugs', purging
-2022-08-26 14:08:04,708 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40867
-2022-08-26 14:08:04,708 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40867
-2022-08-26 14:08:04,708 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46501
-2022-08-26 14:08:04,708 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42225
-2022-08-26 14:08:04,708 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:04,708 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:04,708 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:04,708 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-u51xrpoi
-2022-08-26 14:08:04,708 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:04,737 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44845
-2022-08-26 14:08:04,737 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44845
-2022-08-26 14:08:04,737 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37129
-2022-08-26 14:08:04,737 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42225
-2022-08-26 14:08:04,738 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:04,738 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:04,738 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:04,738 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0t1pm30h
-2022-08-26 14:08:04,738 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:04,994 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40867', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:05,254 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40867
-2022-08-26 14:08:05,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:05,254 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42225
-2022-08-26 14:08:05,254 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:05,255 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44845', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:05,255 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:05,255 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44845
-2022-08-26 14:08:05,255 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:05,255 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42225
-2022-08-26 14:08:05,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:05,256 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:05,261 - distributed.scheduler - INFO - Receive client connection: Client-2d7ee6f7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:05,261 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:08:05,272 - distributed.scheduler - INFO - Remove client Client-2d7ee6f7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:05,272 - distributed.scheduler - INFO - Remove client Client-2d7ee6f7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:05,273 - distributed.scheduler - INFO - Close client connection: Client-2d7ee6f7-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_publish.py::test_datasets_keys 2022-08-26 14:08:06,138 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:06,140 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:06,143 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:06,143 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33537
-2022-08-26 14:08:06,143 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:06,155 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-0t1pm30h', purging
-2022-08-26 14:08:06,155 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-u51xrpoi', purging
-2022-08-26 14:08:06,161 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33083
-2022-08-26 14:08:06,161 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33083
-2022-08-26 14:08:06,161 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33275
-2022-08-26 14:08:06,161 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33537
-2022-08-26 14:08:06,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:06,161 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:06,161 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:06,161 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kkysmnsr
-2022-08-26 14:08:06,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:06,197 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45941
-2022-08-26 14:08:06,197 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45941
-2022-08-26 14:08:06,197 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38761
-2022-08-26 14:08:06,197 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33537
-2022-08-26 14:08:06,197 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:06,197 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:06,197 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:06,197 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ywua5res
-2022-08-26 14:08:06,197 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:06,442 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33083', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:06,699 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33083
-2022-08-26 14:08:06,699 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:06,699 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33537
-2022-08-26 14:08:06,700 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:06,700 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45941', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:06,700 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45941
-2022-08-26 14:08:06,700 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:06,700 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:06,701 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33537
-2022-08-26 14:08:06,701 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:06,701 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:06,707 - distributed.scheduler - INFO - Receive client connection: Client-2e5b6972-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:06,707 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:08:06,718 - distributed.scheduler - INFO - Remove client Client-2e5b6972-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:06,718 - distributed.scheduler - INFO - Remove client Client-2e5b6972-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:06,718 - distributed.scheduler - INFO - Close client connection: Client-2e5b6972-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_publish.py::test_datasets_contains 2022-08-26 14:08:07,594 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:07,596 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:07,600 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:07,600 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38673
-2022-08-26 14:08:07,600 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:07,602 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ywua5res', purging
-2022-08-26 14:08:07,602 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-kkysmnsr', purging
-2022-08-26 14:08:07,608 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43481
-2022-08-26 14:08:07,608 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43481
-2022-08-26 14:08:07,608 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33561
-2022-08-26 14:08:07,608 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38673
-2022-08-26 14:08:07,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:07,608 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:07,608 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:07,608 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zy5u5ie1
-2022-08-26 14:08:07,609 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:07,651 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42333
-2022-08-26 14:08:07,651 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42333
-2022-08-26 14:08:07,651 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39323
-2022-08-26 14:08:07,651 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38673
-2022-08-26 14:08:07,651 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:07,651 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:07,651 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:07,651 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s8rq0828
-2022-08-26 14:08:07,652 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:07,894 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43481', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:08,155 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43481
-2022-08-26 14:08:08,155 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:08,155 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38673
-2022-08-26 14:08:08,155 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:08,155 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42333', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:08,156 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42333
-2022-08-26 14:08:08,156 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:08,156 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:08,156 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38673
-2022-08-26 14:08:08,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:08,157 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:08,162 - distributed.scheduler - INFO - Receive client connection: Client-2f397ff4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:08,162 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:08:08,173 - distributed.scheduler - INFO - Remove client Client-2f397ff4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:08,174 - distributed.scheduler - INFO - Remove client Client-2f397ff4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:08,174 - distributed.scheduler - INFO - Close client connection: Client-2f397ff4-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_publish.py::test_datasets_republish 2022-08-26 14:08:09,037 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:09,039 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:09,042 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:09,042 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42215
-2022-08-26 14:08:09,042 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:09,054 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-zy5u5ie1', purging
-2022-08-26 14:08:09,055 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-s8rq0828', purging
-2022-08-26 14:08:09,061 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40145
-2022-08-26 14:08:09,061 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40145
-2022-08-26 14:08:09,061 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45801
-2022-08-26 14:08:09,061 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42215
-2022-08-26 14:08:09,061 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:09,061 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:09,061 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:09,061 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oxx48wda
-2022-08-26 14:08:09,061 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:09,100 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38453
-2022-08-26 14:08:09,100 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38453
-2022-08-26 14:08:09,100 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33155
-2022-08-26 14:08:09,100 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42215
-2022-08-26 14:08:09,100 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:09,100 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:09,100 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:09,100 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0scsjp84
-2022-08-26 14:08:09,100 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:09,342 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40145', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:09,602 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40145
-2022-08-26 14:08:09,603 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:09,603 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42215
-2022-08-26 14:08:09,603 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:09,604 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38453', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:09,604 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:09,604 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38453
-2022-08-26 14:08:09,604 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:09,604 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42215
-2022-08-26 14:08:09,605 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:09,605 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:09,610 - distributed.scheduler - INFO - Receive client connection: Client-30167a61-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:09,610 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:09,687 - distributed.core - ERROR - 'Dataset key already exists'
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 805, in wrapper
-    return func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/publish.py", line 35, in put
-    raise KeyError("Dataset %s already exists" % name)
-KeyError: 'Dataset key already exists'
-2022-08-26 14:08:09,687 - distributed.core - ERROR - Exception while handling op publish_put
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 768, in _handle_comm
-    result = handler(**msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 805, in wrapper
-    return func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/publish.py", line 35, in put
-    raise KeyError("Dataset %s already exists" % name)
-KeyError: 'Dataset key already exists'
-PASSED2022-08-26 14:08:09,692 - distributed.scheduler - INFO - Remove client Client-30167a61-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:09,692 - distributed.scheduler - INFO - Remove client Client-30167a61-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:09,692 - distributed.scheduler - INFO - Close client connection: Client-30167a61-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_publish.py::test_datasets_iter 2022-08-26 14:08:10,568 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:10,570 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:10,573 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:10,574 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37287
-2022-08-26 14:08:10,574 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:10,576 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-oxx48wda', purging
-2022-08-26 14:08:10,576 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-0scsjp84', purging
-2022-08-26 14:08:10,582 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44405
-2022-08-26 14:08:10,582 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44405
-2022-08-26 14:08:10,582 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42099
-2022-08-26 14:08:10,582 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37287
-2022-08-26 14:08:10,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:10,582 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:10,582 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:10,582 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wo54xh0u
-2022-08-26 14:08:10,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:10,622 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38903
-2022-08-26 14:08:10,622 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38903
-2022-08-26 14:08:10,622 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40451
-2022-08-26 14:08:10,622 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37287
-2022-08-26 14:08:10,622 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:10,622 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:10,622 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:10,622 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ev986hv2
-2022-08-26 14:08:10,622 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:10,861 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44405', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:11,120 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44405
-2022-08-26 14:08:11,120 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,120 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37287
-2022-08-26 14:08:11,120 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,121 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38903', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:11,122 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38903
-2022-08-26 14:08:11,122 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,122 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,122 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37287
-2022-08-26 14:08:11,122 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,123 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,127 - distributed.scheduler - INFO - Receive client connection: Client-30fdfea9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,128 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:08:11,139 - distributed.scheduler - INFO - Remove client Client-30fdfea9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,139 - distributed.scheduler - INFO - Remove client Client-30fdfea9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,140 - distributed.scheduler - INFO - Close client connection: Client-30fdfea9-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_publish.py::test_datasets_async 2022-08-26 14:08:11,154 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:11,156 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:11,156 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42425
-2022-08-26 14:08:11,156 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41957
-2022-08-26 14:08:11,156 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ev986hv2', purging
-2022-08-26 14:08:11,157 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-wo54xh0u', purging
-2022-08-26 14:08:11,161 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39495
-2022-08-26 14:08:11,161 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39495
-2022-08-26 14:08:11,161 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:11,161 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34939
-2022-08-26 14:08:11,161 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42425
-2022-08-26 14:08:11,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,161 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:11,161 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:11,161 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6nzr4_lw
-2022-08-26 14:08:11,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,162 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36393
-2022-08-26 14:08:11,162 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36393
-2022-08-26 14:08:11,162 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:11,162 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33567
-2022-08-26 14:08:11,162 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42425
-2022-08-26 14:08:11,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,162 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:11,162 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:11,162 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-anmkhwwq
-2022-08-26 14:08:11,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,165 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39495', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:11,165 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39495
-2022-08-26 14:08:11,165 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,166 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36393', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:11,166 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36393
-2022-08-26 14:08:11,166 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,166 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42425
-2022-08-26 14:08:11,166 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,167 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42425
-2022-08-26 14:08:11,167 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,168 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,168 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,182 - distributed.scheduler - INFO - Receive client connection: Client-31065b98-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,182 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,193 - distributed.scheduler - INFO - Remove client Client-31065b98-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,193 - distributed.scheduler - INFO - Remove client Client-31065b98-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,194 - distributed.scheduler - INFO - Close client connection: Client-31065b98-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,194 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39495
-2022-08-26 14:08:11,194 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36393
-2022-08-26 14:08:11,195 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39495', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:11,195 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39495
-2022-08-26 14:08:11,195 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36393', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:11,196 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36393
-2022-08-26 14:08:11,196 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:11,196 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f8dfb672-e7fa-4dfb-9843-dcf9a09bdec6 Address tcp://127.0.0.1:39495 Status: Status.closing
-2022-08-26 14:08:11,196 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8d302d50-1002-4a2f-afd1-0824c8083f09 Address tcp://127.0.0.1:36393 Status: Status.closing
-2022-08-26 14:08:11,197 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:11,197 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:11,400 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_publish.py::test_pickle_safe 2022-08-26 14:08:11,406 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:11,408 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:11,408 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45603
-2022-08-26 14:08:11,408 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36163
-2022-08-26 14:08:11,412 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35403
-2022-08-26 14:08:11,412 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35403
-2022-08-26 14:08:11,412 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:11,412 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36931
-2022-08-26 14:08:11,412 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45603
-2022-08-26 14:08:11,412 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,412 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:11,412 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:11,413 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2ef5cq81
-2022-08-26 14:08:11,413 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,413 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41331
-2022-08-26 14:08:11,413 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41331
-2022-08-26 14:08:11,413 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:11,413 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35939
-2022-08-26 14:08:11,413 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45603
-2022-08-26 14:08:11,413 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,413 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:11,413 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:11,413 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3dm_slr1
-2022-08-26 14:08:11,414 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,416 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35403', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:11,417 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35403
-2022-08-26 14:08:11,417 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,417 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41331', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:11,417 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41331
-2022-08-26 14:08:11,417 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,418 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45603
-2022-08-26 14:08:11,418 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,418 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45603
-2022-08-26 14:08:11,418 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,418 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,418 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,432 - distributed.scheduler - INFO - Receive client connection: Client-312c7d13-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,432 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,436 - distributed.scheduler - INFO - Receive client connection: Client-312d0872-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,436 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,438 - distributed.protocol.core - CRITICAL - Failed to Serialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 109, in dumps
-    frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/msgpack/__init__.py", line 38, in packb
-    return Packer(**kwargs).pack(o)
-  File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 285, in msgpack._cmsgpack.Packer._pack
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 100, in _encode_default
-    frames.extend(create_serialized_sub_frames(obj))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 60, in create_serialized_sub_frames
-    sub_header, sub_frames = serialize_and_split(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 266, in serialize
-    return serialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type function', '<function test_pickle_safe.<locals>.<lambda> at 0x564041ac6d20>')
-2022-08-26 14:08:11,438 - distributed.comm.utils - INFO - Unserializable Message: {'op': 'publish_put', 'keys': [], 'name': 'y', 'data': <Serialize: <function test_pickle_safe.<locals>.<lambda> at 0x564041ac6d20>>, 'override': False, 'client': 'Client-312d0872-2583-11ed-a99d-00d861bc4509', 'reply': True, 'serializers': ['msgpack']}
-2022-08-26 14:08:11,438 - distributed.comm.utils - ERROR - ('Could not serialize object of type function', '<function test_pickle_safe.<locals>.<lambda> at 0x564041ac6d20>')
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/utils.py", line 55, in _to_frames
-    return list(protocol.dumps(msg, **kwargs))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 109, in dumps
-    frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/msgpack/__init__.py", line 38, in packb
-    return Packer(**kwargs).pack(o)
-  File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack
-  File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack
-  File "msgpack/_packer.pyx", line 285, in msgpack._cmsgpack.Packer._pack
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 100, in _encode_default
-    frames.extend(create_serialized_sub_frames(obj))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 60, in create_serialized_sub_frames
-    sub_header, sub_frames = serialize_and_split(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 266, in serialize
-    return serialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type function', '<function test_pickle_safe.<locals>.<lambda> at 0x564041ac6d20>')
-2022-08-26 14:08:11,440 - distributed.protocol.core - CRITICAL - Failed to deserialize
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 158, in loads
-    return msgpack.loads(
-  File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/core.py", line 138, in _decode_default
-    return merge_and_deserialize(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 497, in merge_and_deserialize
-    return deserialize(header, merged_frames, deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 421, in deserialize
-    raise TypeError(
-TypeError: Data serialized with pickle but only able to deserialize data with ['msgpack']
-2022-08-26 14:08:11,447 - distributed.scheduler - INFO - Remove client Client-312d0872-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,447 - distributed.scheduler - INFO - Remove client Client-312d0872-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,448 - distributed.scheduler - INFO - Close client connection: Client-312d0872-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,448 - distributed.scheduler - INFO - Remove client Client-312c7d13-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,449 - distributed.scheduler - INFO - Remove client Client-312c7d13-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,449 - distributed.scheduler - INFO - Close client connection: Client-312c7d13-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,449 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35403
-2022-08-26 14:08:11,449 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41331
-2022-08-26 14:08:11,450 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35403', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:11,450 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35403
-2022-08-26 14:08:11,451 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41331', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:11,451 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41331
-2022-08-26 14:08:11,451 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:11,451 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-851433c7-e9b4-45c8-98c9-0581388420ea Address tcp://127.0.0.1:35403 Status: Status.closing
-2022-08-26 14:08:11,451 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ba3539a2-837e-4fcd-b4b9-d60c067036e5 Address tcp://127.0.0.1:41331 Status: Status.closing
-2022-08-26 14:08:11,452 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:11,452 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:11,654 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_publish.py::test_deserialize_client 2022-08-26 14:08:11,660 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:11,662 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:11,662 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45913
-2022-08-26 14:08:11,662 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46065
-2022-08-26 14:08:11,666 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36017
-2022-08-26 14:08:11,667 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36017
-2022-08-26 14:08:11,667 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:11,667 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46783
-2022-08-26 14:08:11,667 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45913
-2022-08-26 14:08:11,667 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,667 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:11,667 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:11,667 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4i4b_cbj
-2022-08-26 14:08:11,667 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,667 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43305
-2022-08-26 14:08:11,667 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43305
-2022-08-26 14:08:11,667 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:11,668 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40209
-2022-08-26 14:08:11,668 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45913
-2022-08-26 14:08:11,668 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,668 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:11,668 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:11,668 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-09jp0qb4
-2022-08-26 14:08:11,668 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,671 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36017', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:11,671 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36017
-2022-08-26 14:08:11,671 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,671 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43305', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:11,671 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43305
-2022-08-26 14:08:11,672 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,672 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45913
-2022-08-26 14:08:11,672 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,672 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45913
-2022-08-26 14:08:11,672 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,672 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,672 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,686 - distributed.scheduler - INFO - Receive client connection: Client-31534152-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,686 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,693 - distributed.scheduler - INFO - Receive client connection: Client-31545f68-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,694 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,705 - distributed.scheduler - INFO - Remove client Client-31545f68-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,705 - distributed.scheduler - INFO - Remove client Client-31545f68-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,705 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:45913 remote=tcp://127.0.0.1:34164>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:08:11,705 - distributed.scheduler - INFO - Close client connection: Client-31545f68-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,709 - distributed.scheduler - INFO - Receive client connection: Client-3156ae77-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,709 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,721 - distributed.scheduler - INFO - Remove client Client-3156ae77-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,722 - distributed.scheduler - INFO - Remove client Client-3156ae77-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,722 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:45913 remote=tcp://127.0.0.1:34184>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:08:11,722 - distributed.scheduler - INFO - Close client connection: Client-3156ae77-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,732 - distributed.scheduler - INFO - Remove client Client-31534152-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,732 - distributed.scheduler - INFO - Remove client Client-31534152-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,732 - distributed.scheduler - INFO - Close client connection: Client-31534152-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,733 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36017
-2022-08-26 14:08:11,733 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43305
-2022-08-26 14:08:11,734 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36017', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:08:11,734 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36017
-2022-08-26 14:08:11,734 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43305', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:11,734 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43305
-2022-08-26 14:08:11,735 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:11,735 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1e27020a-8b09-4b1e-b489-764587771a05 Address tcp://127.0.0.1:36017 Status: Status.closing
-2022-08-26 14:08:11,735 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-939b1089-8d81-494d-b936-79a161135cad Address tcp://127.0.0.1:43305 Status: Status.closing
-2022-08-26 14:08:11,735 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:11,736 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:11,938 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_pubsub.py::test_speed 2022-08-26 14:08:11,944 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:11,946 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:11,946 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33199
-2022-08-26 14:08:11,946 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46671
-2022-08-26 14:08:11,950 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44825
-2022-08-26 14:08:11,951 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44825
-2022-08-26 14:08:11,951 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:11,951 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36645
-2022-08-26 14:08:11,951 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33199
-2022-08-26 14:08:11,951 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,951 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:11,951 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:11,951 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wjabw5ms
-2022-08-26 14:08:11,951 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,951 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43191
-2022-08-26 14:08:11,951 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43191
-2022-08-26 14:08:11,951 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:11,951 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43737
-2022-08-26 14:08:11,952 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33199
-2022-08-26 14:08:11,952 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,952 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:11,952 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:11,952 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-108i9nf6
-2022-08-26 14:08:11,952 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,955 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44825', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:11,955 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44825
-2022-08-26 14:08:11,955 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,955 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43191', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:11,955 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43191
-2022-08-26 14:08:11,956 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,956 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33199
-2022-08-26 14:08:11,956 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,956 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33199
-2022-08-26 14:08:11,956 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:11,956 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,956 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,970 - distributed.scheduler - INFO - Receive client connection: Client-317e973f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:11,970 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,995 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:11,997 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:12,110 - tornado.application - ERROR - Exception in callback functools.partial(<bound method PubSubWorkerExtension.cleanup of <distributed.pubsub.PubSubWorkerExtension object at 0x564040c3d4b0>>)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 740, in _run_callback
-    ret = callback()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/pubsub.py", line 170, in cleanup
-    del self.publish_to_scheduler[name]
-KeyError: 'b'
-2022-08-26 14:08:12,129 - distributed.scheduler - INFO - Remove client Client-317e973f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,129 - distributed.scheduler - INFO - Remove client Client-317e973f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,129 - distributed.scheduler - INFO - Close client connection: Client-317e973f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,130 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44825
-2022-08-26 14:08:12,130 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43191
-2022-08-26 14:08:12,131 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44825', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:12,131 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44825
-2022-08-26 14:08:12,131 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43191', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:12,131 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43191
-2022-08-26 14:08:12,131 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:12,131 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7d329a2d-c24d-4f48-8f59-46c1effed5fd Address tcp://127.0.0.1:44825 Status: Status.closing
-2022-08-26 14:08:12,132 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9f3fac08-32a7-4580-867d-a70432c70cdc Address tcp://127.0.0.1:43191 Status: Status.closing
-2022-08-26 14:08:12,133 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:12,133 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:12,338 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_pubsub.py::test_client 2022-08-26 14:08:12,344 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:12,345 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:12,345 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39539
-2022-08-26 14:08:12,346 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42171
-2022-08-26 14:08:12,349 - distributed.scheduler - INFO - Receive client connection: Client-31b85cb4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,349 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:12,382 - distributed.scheduler - INFO - Remove client Client-31b85cb4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,382 - distributed.scheduler - INFO - Remove client Client-31b85cb4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,382 - distributed.scheduler - INFO - Close client connection: Client-31b85cb4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,382 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:12,382 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:12,584 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_pubsub.py::test_client_worker 2022-08-26 14:08:12,590 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:12,592 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:12,592 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44783
-2022-08-26 14:08:12,592 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35055
-2022-08-26 14:08:12,596 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41227
-2022-08-26 14:08:12,596 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41227
-2022-08-26 14:08:12,596 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:12,596 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36991
-2022-08-26 14:08:12,596 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44783
-2022-08-26 14:08:12,596 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,596 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:12,597 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:12,597 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-or7lrw6b
-2022-08-26 14:08:12,597 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,597 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42753
-2022-08-26 14:08:12,597 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42753
-2022-08-26 14:08:12,597 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:12,597 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43023
-2022-08-26 14:08:12,597 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44783
-2022-08-26 14:08:12,597 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,597 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:12,597 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:12,597 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q22d8lxd
-2022-08-26 14:08:12,597 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,600 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41227', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:12,601 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41227
-2022-08-26 14:08:12,601 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:12,601 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42753', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:12,601 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42753
-2022-08-26 14:08:12,601 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:12,602 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44783
-2022-08-26 14:08:12,602 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,602 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44783
-2022-08-26 14:08:12,602 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,602 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:12,602 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:12,616 - distributed.scheduler - INFO - Receive client connection: Client-31e122a2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,616 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:12,675 - distributed.scheduler - INFO - Remove client Client-31e122a2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,675 - distributed.scheduler - INFO - Remove client Client-31e122a2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,676 - distributed.scheduler - INFO - Close client connection: Client-31e122a2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,676 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41227
-2022-08-26 14:08:12,676 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42753
-2022-08-26 14:08:12,677 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41227', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:12,677 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41227
-2022-08-26 14:08:12,677 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42753', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:12,677 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42753
-2022-08-26 14:08:12,677 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:12,678 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8a54e273-1bd5-4cb7-bc5d-376381b522cc Address tcp://127.0.0.1:41227 Status: Status.closing
-2022-08-26 14:08:12,678 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fda6d720-f0ad-4a47-a32d-a8b4a4d385ea Address tcp://127.0.0.1:42753 Status: Status.closing
-2022-08-26 14:08:12,680 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:12,680 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:12,885 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_pubsub.py::test_timeouts 2022-08-26 14:08:12,890 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:12,892 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:12,892 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46261
-2022-08-26 14:08:12,892 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42295
-2022-08-26 14:08:12,897 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37765
-2022-08-26 14:08:12,897 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37765
-2022-08-26 14:08:12,897 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:12,897 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41403
-2022-08-26 14:08:12,897 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46261
-2022-08-26 14:08:12,897 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,897 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:12,897 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:12,897 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2babrls_
-2022-08-26 14:08:12,897 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,898 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44045
-2022-08-26 14:08:12,898 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44045
-2022-08-26 14:08:12,898 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:12,898 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35955
-2022-08-26 14:08:12,898 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46261
-2022-08-26 14:08:12,898 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,898 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:12,898 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:12,898 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hyfswpsj
-2022-08-26 14:08:12,898 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,901 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37765', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:12,901 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37765
-2022-08-26 14:08:12,901 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:12,901 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44045', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:12,902 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44045
-2022-08-26 14:08:12,902 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:12,902 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46261
-2022-08-26 14:08:12,902 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,902 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46261
-2022-08-26 14:08:12,902 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:12,903 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:12,903 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:12,916 - distributed.scheduler - INFO - Receive client connection: Client-320efba7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:12,916 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,038 - distributed.scheduler - INFO - Remove client Client-320efba7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,038 - distributed.scheduler - INFO - Remove client Client-320efba7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,039 - distributed.scheduler - INFO - Close client connection: Client-320efba7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,039 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37765
-2022-08-26 14:08:13,039 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44045
-2022-08-26 14:08:13,040 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37765', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:13,040 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37765
-2022-08-26 14:08:13,040 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44045', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:13,040 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44045
-2022-08-26 14:08:13,040 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:13,041 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7f7655a6-919c-464a-9836-2810f30c5cc8 Address tcp://127.0.0.1:37765 Status: Status.closing
-2022-08-26 14:08:13,041 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bc3807b3-f7ac-4993-bde1-495fb9a2ed62 Address tcp://127.0.0.1:44045 Status: Status.closing
-2022-08-26 14:08:13,042 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:13,042 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:13,245 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_pubsub.py::test_repr 2022-08-26 14:08:13,251 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:13,252 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:13,252 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43823
-2022-08-26 14:08:13,252 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41275
-2022-08-26 14:08:13,257 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36115
-2022-08-26 14:08:13,257 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36115
-2022-08-26 14:08:13,257 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:13,257 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36887
-2022-08-26 14:08:13,257 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43823
-2022-08-26 14:08:13,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,257 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:13,257 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:13,257 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1ag0roll
-2022-08-26 14:08:13,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,258 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43949
-2022-08-26 14:08:13,258 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43949
-2022-08-26 14:08:13,258 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:13,258 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45383
-2022-08-26 14:08:13,258 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43823
-2022-08-26 14:08:13,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,258 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:13,258 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:13,258 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y82doxtq
-2022-08-26 14:08:13,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,261 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36115', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:13,261 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36115
-2022-08-26 14:08:13,261 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,262 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43949', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:13,262 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43949
-2022-08-26 14:08:13,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,262 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43823
-2022-08-26 14:08:13,262 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,262 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43823
-2022-08-26 14:08:13,262 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,276 - distributed.scheduler - INFO - Receive client connection: Client-3245ede3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,277 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,288 - distributed.scheduler - INFO - Remove client Client-3245ede3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,288 - distributed.scheduler - INFO - Remove client Client-3245ede3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,288 - distributed.scheduler - INFO - Close client connection: Client-3245ede3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,288 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36115
-2022-08-26 14:08:13,289 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43949
-2022-08-26 14:08:13,290 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36115', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:13,290 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36115
-2022-08-26 14:08:13,290 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43949', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:13,290 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43949
-2022-08-26 14:08:13,290 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:13,290 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-aac5ba19-2090-479d-b6c1-523d1b639872 Address tcp://127.0.0.1:36115 Status: Status.closing
-2022-08-26 14:08:13,290 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c841168d-4f25-429b-af08-8a15027b46da Address tcp://127.0.0.1:43949 Status: Status.closing
-2022-08-26 14:08:13,291 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:13,291 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:13,494 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_pubsub.py::test_basic 2022-08-26 14:08:13,500 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:13,501 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:13,502 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42127
-2022-08-26 14:08:13,502 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45125
-2022-08-26 14:08:13,506 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42473
-2022-08-26 14:08:13,506 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42473
-2022-08-26 14:08:13,506 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:13,506 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35763
-2022-08-26 14:08:13,506 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42127
-2022-08-26 14:08:13,506 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,506 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:13,506 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:13,506 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f33uwkfj
-2022-08-26 14:08:13,506 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,507 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44801
-2022-08-26 14:08:13,507 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44801
-2022-08-26 14:08:13,507 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:13,507 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41449
-2022-08-26 14:08:13,507 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42127
-2022-08-26 14:08:13,507 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,507 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:13,507 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:13,507 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3r_vpqk1
-2022-08-26 14:08:13,507 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,510 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42473', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:13,510 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42473
-2022-08-26 14:08:13,511 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,511 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44801', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:13,511 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44801
-2022-08-26 14:08:13,511 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,511 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42127
-2022-08-26 14:08:13,511 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,512 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42127
-2022-08-26 14:08:13,512 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,512 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,512 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,526 - distributed.scheduler - INFO - Receive client connection: Client-326bf7be-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,526 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,534 - distributed.worker - INFO - Run out-of-band function 'publish'
-2022-08-26 14:08:13,557 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,557 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,675 - distributed.scheduler - INFO - Remove client Client-326bf7be-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,675 - distributed.scheduler - INFO - Remove client Client-326bf7be-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,675 - distributed.scheduler - INFO - Close client connection: Client-326bf7be-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,676 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42473
-2022-08-26 14:08:13,676 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44801
-2022-08-26 14:08:13,677 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42473', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:13,677 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42473
-2022-08-26 14:08:13,677 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44801', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:13,677 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44801
-2022-08-26 14:08:13,678 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:13,678 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cd4eb7ba-6584-4258-91ff-0aaf0d7dca05 Address tcp://127.0.0.1:42473 Status: Status.closing
-2022-08-26 14:08:13,678 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-63656732-4507-4343-b0ec-1cc0a88b41eb Address tcp://127.0.0.1:44801 Status: Status.closing
-2022-08-26 14:08:13,678 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:42473 failed: CommClosedError: in <TCP (closed) Scheduler Broadcast local=tcp://127.0.0.1:33060 remote=tcp://127.0.0.1:42473>: Stream is closed
-2022-08-26 14:08:13,680 - tornado.application - ERROR - Exception in callback functools.partial(<bound method PubSubWorkerExtension.cleanup of <distributed.pubsub.PubSubWorkerExtension object at 0x564041bae730>>)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 740, in _run_callback
-    ret = callback()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/pubsub.py", line 168, in cleanup
-    self.worker.batched_stream.send(msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 156, in send
-    raise CommClosedError(f"Comm {self.comm!r} already closed.")
-distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:58290 remote=tcp://127.0.0.1:42127> already closed.
-2022-08-26 14:08:13,680 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:13,680 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:13,883 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-XPASS (flaky and re-fai...)
-distributed/tests/test_queues.py::test_queue 2022-08-26 14:08:13,889 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:13,891 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:13,891 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39235
-2022-08-26 14:08:13,891 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37043
-2022-08-26 14:08:13,895 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46135
-2022-08-26 14:08:13,895 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46135
-2022-08-26 14:08:13,896 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:13,896 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40191
-2022-08-26 14:08:13,896 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39235
-2022-08-26 14:08:13,896 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,896 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:13,896 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:13,896 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p9d8zldf
-2022-08-26 14:08:13,896 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,896 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34967
-2022-08-26 14:08:13,896 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34967
-2022-08-26 14:08:13,896 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:13,896 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40885
-2022-08-26 14:08:13,897 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39235
-2022-08-26 14:08:13,897 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,897 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:13,897 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:13,897 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-eyzl8loo
-2022-08-26 14:08:13,897 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,900 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46135', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:13,900 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46135
-2022-08-26 14:08:13,900 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,900 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34967', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:13,900 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34967
-2022-08-26 14:08:13,901 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,901 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39235
-2022-08-26 14:08:13,901 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,901 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39235
-2022-08-26 14:08:13,901 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:13,901 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,901 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:13,915 - distributed.scheduler - INFO - Receive client connection: Client-32a761a3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:13,915 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:14,022 - distributed.core - ERROR - Exception while handling op queue_get
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/queues.py", line 159, in get
-    await getter
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/queues.py", line 119, in get
-    record = await asyncio.wait_for(self.queues[name].get(), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-2022-08-26 14:08:14,146 - distributed.scheduler - INFO - Remove client Client-32a761a3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:14,146 - distributed.scheduler - INFO - Remove client Client-32a761a3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:14,146 - distributed.scheduler - INFO - Close client connection: Client-32a761a3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:14,147 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46135
-2022-08-26 14:08:14,147 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34967
-2022-08-26 14:08:14,148 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46135', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:14,148 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46135
-2022-08-26 14:08:14,148 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34967', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:14,148 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34967
-2022-08-26 14:08:14,148 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:14,148 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c8c91b98-6dbe-4e3f-a7cc-b738009183b0 Address tcp://127.0.0.1:46135 Status: Status.closing
-2022-08-26 14:08:14,149 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-16a90e8f-b245-46bb-97cc-805276a13ae3 Address tcp://127.0.0.1:34967 Status: Status.closing
-2022-08-26 14:08:14,150 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:14,150 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:14,353 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_queues.py::test_queue_with_data 2022-08-26 14:08:14,359 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:14,360 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:14,361 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35843
-2022-08-26 14:08:14,361 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41281
-2022-08-26 14:08:14,365 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34055
-2022-08-26 14:08:14,365 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34055
-2022-08-26 14:08:14,365 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:14,365 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34539
-2022-08-26 14:08:14,365 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35843
-2022-08-26 14:08:14,365 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:14,365 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:14,365 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:14,365 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mybd8eiw
-2022-08-26 14:08:14,365 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:14,366 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43989
-2022-08-26 14:08:14,366 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43989
-2022-08-26 14:08:14,366 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:14,366 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40447
-2022-08-26 14:08:14,366 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35843
-2022-08-26 14:08:14,366 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:14,366 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:14,366 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:14,366 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5h8aokf6
-2022-08-26 14:08:14,366 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:14,369 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34055', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:14,369 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34055
-2022-08-26 14:08:14,369 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:14,370 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43989', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:14,370 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43989
-2022-08-26 14:08:14,370 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:14,370 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35843
-2022-08-26 14:08:14,370 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:14,371 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35843
-2022-08-26 14:08:14,371 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:14,371 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:14,371 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:14,385 - distributed.scheduler - INFO - Receive client connection: Client-32ef09b8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:14,385 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:14,488 - distributed.core - ERROR - Exception while handling op queue_get
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/queues.py", line 159, in get
-    await getter
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/queues.py", line 119, in get
-    record = await asyncio.wait_for(self.queues[name].get(), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-2022-08-26 14:08:14,490 - distributed.scheduler - INFO - Remove client Client-32ef09b8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:14,490 - distributed.scheduler - INFO - Remove client Client-32ef09b8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:14,490 - distributed.scheduler - INFO - Close client connection: Client-32ef09b8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:14,490 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34055
-2022-08-26 14:08:14,491 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43989
-2022-08-26 14:08:14,492 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34055', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:14,492 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34055
-2022-08-26 14:08:14,492 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43989', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:14,492 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43989
-2022-08-26 14:08:14,492 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:14,492 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c689b7a9-c3a4-461b-bcec-0f22dddb9e40 Address tcp://127.0.0.1:34055 Status: Status.closing
-2022-08-26 14:08:14,492 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-35fc36ab-9c06-48a2-8e9d-dc98b42bb4e9 Address tcp://127.0.0.1:43989 Status: Status.closing
-2022-08-26 14:08:14,493 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:14,493 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:14,696 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_queues.py::test_sync 2022-08-26 14:08:15,558 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:15,560 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:15,563 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:15,563 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34077
-2022-08-26 14:08:15,563 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:15,588 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42449
-2022-08-26 14:08:15,588 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42449
-2022-08-26 14:08:15,588 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35817
-2022-08-26 14:08:15,588 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34077
-2022-08-26 14:08:15,588 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:15,588 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:15,588 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:15,588 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sg8lyrq_
-2022-08-26 14:08:15,588 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:15,610 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34213
-2022-08-26 14:08:15,610 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34213
-2022-08-26 14:08:15,610 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33207
-2022-08-26 14:08:15,610 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34077
-2022-08-26 14:08:15,610 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:15,610 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:15,610 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:15,610 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vr6g6tn8
-2022-08-26 14:08:15,610 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:15,872 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34213', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:16,133 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34213
-2022-08-26 14:08:16,133 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:16,134 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34077
-2022-08-26 14:08:16,134 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:16,134 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42449', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:16,135 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42449
-2022-08-26 14:08:16,135 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:16,135 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:16,135 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34077
-2022-08-26 14:08:16,135 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:16,136 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:16,141 - distributed.scheduler - INFO - Receive client connection: Client-33faf7be-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:16,141 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:08:16,164 - distributed.scheduler - INFO - Remove client Client-33faf7be-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:16,164 - distributed.scheduler - INFO - Remove client Client-33faf7be-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_queues.py::test_hold_futures 2022-08-26 14:08:16,177 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:16,179 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:16,179 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42769
-2022-08-26 14:08:16,179 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42925
-2022-08-26 14:08:16,179 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-sg8lyrq_', purging
-2022-08-26 14:08:16,180 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-vr6g6tn8', purging
-2022-08-26 14:08:16,184 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40959
-2022-08-26 14:08:16,184 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40959
-2022-08-26 14:08:16,184 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:16,184 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37159
-2022-08-26 14:08:16,184 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42769
-2022-08-26 14:08:16,184 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:16,184 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:16,184 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:16,184 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i_vb4cul
-2022-08-26 14:08:16,184 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:16,185 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39365
-2022-08-26 14:08:16,185 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39365
-2022-08-26 14:08:16,185 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:16,185 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37401
-2022-08-26 14:08:16,185 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42769
-2022-08-26 14:08:16,185 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:16,185 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:16,185 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:16,185 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d7jdsney
-2022-08-26 14:08:16,185 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:16,188 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40959', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:16,188 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40959
-2022-08-26 14:08:16,188 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:16,189 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39365', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:16,189 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39365
-2022-08-26 14:08:16,189 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:16,189 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42769
-2022-08-26 14:08:16,189 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:16,190 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42769
-2022-08-26 14:08:16,190 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:16,190 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:16,190 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:16,204 - distributed.scheduler - INFO - Receive client connection: Client-34049a73-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:16,204 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:16,217 - distributed.scheduler - INFO - Remove client Client-34049a73-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:16,217 - distributed.scheduler - INFO - Remove client Client-34049a73-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:16,218 - distributed.scheduler - INFO - Close client connection: Client-34049a73-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:16,323 - distributed.scheduler - INFO - Receive client connection: Client-3416c6f8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:16,323 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:16,334 - distributed.scheduler - INFO - Remove client Client-3416c6f8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:16,334 - distributed.scheduler - INFO - Remove client Client-3416c6f8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:16,335 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:42769 remote=tcp://127.0.0.1:43096>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:08:16,335 - distributed.scheduler - INFO - Close client connection: Client-3416c6f8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:16,336 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40959
-2022-08-26 14:08:16,336 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39365
-2022-08-26 14:08:16,337 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40959', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:16,337 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40959
-2022-08-26 14:08:16,337 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1ad1b942-f911-4518-82d8-47fcab95063c Address tcp://127.0.0.1:40959 Status: Status.closing
-2022-08-26 14:08:16,338 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e3a5552d-4a17-4df2-b409-448e8e24c1df Address tcp://127.0.0.1:39365 Status: Status.closing
-2022-08-26 14:08:16,338 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39365', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:16,338 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39365
-2022-08-26 14:08:16,338 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:16,339 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:16,339 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:16,542 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_queues.py::test_picklability SKIPPED (getting...)
-distributed/tests/test_queues.py::test_picklability_sync 2022-08-26 14:08:17,406 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:17,408 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:17,411 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:17,411 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42637
-2022-08-26 14:08:17,411 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:17,429 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34697
-2022-08-26 14:08:17,429 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34697
-2022-08-26 14:08:17,429 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41337
-2022-08-26 14:08:17,430 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42637
-2022-08-26 14:08:17,430 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:17,430 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:17,430 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:17,430 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9l87vxof
-2022-08-26 14:08:17,430 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:17,469 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35047
-2022-08-26 14:08:17,469 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35047
-2022-08-26 14:08:17,469 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33219
-2022-08-26 14:08:17,469 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42637
-2022-08-26 14:08:17,469 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:17,469 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:17,469 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:17,469 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-g8w0ued1
-2022-08-26 14:08:17,469 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:17,719 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34697', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:17,981 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34697
-2022-08-26 14:08:17,981 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:17,981 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42637
-2022-08-26 14:08:17,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:17,982 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35047', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:17,983 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35047
-2022-08-26 14:08:17,983 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:17,983 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:17,983 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42637
-2022-08-26 14:08:17,983 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:17,984 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:17,988 - distributed.scheduler - INFO - Receive client connection: Client-3514ecc5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:17,989 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,006 - distributed.scheduler - INFO - Receive client connection: Client-worker-351741fa-2583-11ed-be5f-00d861bc4509
-2022-08-26 14:08:18,007 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:08:18,020 - distributed.scheduler - INFO - Remove client Client-3514ecc5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,020 - distributed.scheduler - INFO - Remove client Client-3514ecc5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,021 - distributed.scheduler - INFO - Close client connection: Client-3514ecc5-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_queues.py::test_race SKIPPED (need --runslow ...)
-distributed/tests/test_queues.py::test_same_futures 2022-08-26 14:08:18,035 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:18,037 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:18,037 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42621
-2022-08-26 14:08:18,037 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43405
-2022-08-26 14:08:18,038 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-g8w0ued1', purging
-2022-08-26 14:08:18,038 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-9l87vxof', purging
-2022-08-26 14:08:18,042 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41857
-2022-08-26 14:08:18,042 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41857
-2022-08-26 14:08:18,042 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:18,042 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45093
-2022-08-26 14:08:18,042 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42621
-2022-08-26 14:08:18,042 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,043 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:18,043 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:18,043 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pwt9brii
-2022-08-26 14:08:18,043 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,043 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41339
-2022-08-26 14:08:18,043 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41339
-2022-08-26 14:08:18,043 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:18,043 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44131
-2022-08-26 14:08:18,043 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42621
-2022-08-26 14:08:18,043 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,043 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:18,043 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:18,044 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qaw7qdmw
-2022-08-26 14:08:18,044 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,046 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41857', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:18,047 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41857
-2022-08-26 14:08:18,047 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,047 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41339', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:18,047 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41339
-2022-08-26 14:08:18,047 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,048 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42621
-2022-08-26 14:08:18,048 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,048 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42621
-2022-08-26 14:08:18,048 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,048 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,048 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,062 - distributed.scheduler - INFO - Receive client connection: Client-35202726-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,062 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,295 - distributed.scheduler - INFO - Remove client Client-35202726-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,296 - distributed.scheduler - INFO - Remove client Client-35202726-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,296 - distributed.scheduler - INFO - Close client connection: Client-35202726-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,296 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41857
-2022-08-26 14:08:18,297 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41339
-2022-08-26 14:08:18,298 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41857', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:18,298 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41857
-2022-08-26 14:08:18,298 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41339', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:18,298 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41339
-2022-08-26 14:08:18,298 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:18,298 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1c099ad0-8119-4796-81b3-319977e41ee3 Address tcp://127.0.0.1:41857 Status: Status.closing
-2022-08-26 14:08:18,298 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-01d2370f-52bb-4a28-9991-ede05e86e67b Address tcp://127.0.0.1:41339 Status: Status.closing
-2022-08-26 14:08:18,299 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:18,299 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:18,503 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_queues.py::test_get_many 2022-08-26 14:08:18,509 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:18,511 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:18,511 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43585
-2022-08-26 14:08:18,511 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45839
-2022-08-26 14:08:18,515 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42211
-2022-08-26 14:08:18,515 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42211
-2022-08-26 14:08:18,515 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:18,516 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39179
-2022-08-26 14:08:18,516 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43585
-2022-08-26 14:08:18,516 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,516 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:18,516 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:18,516 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-esq7qx3a
-2022-08-26 14:08:18,516 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,516 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42545
-2022-08-26 14:08:18,516 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42545
-2022-08-26 14:08:18,516 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:18,516 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33435
-2022-08-26 14:08:18,516 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43585
-2022-08-26 14:08:18,517 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,517 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:18,517 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:18,517 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-juy4yf1z
-2022-08-26 14:08:18,517 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,520 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42211', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:18,520 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42211
-2022-08-26 14:08:18,520 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,520 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42545', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:18,521 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42545
-2022-08-26 14:08:18,521 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,521 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43585
-2022-08-26 14:08:18,521 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,521 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43585
-2022-08-26 14:08:18,521 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,521 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,521 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,535 - distributed.scheduler - INFO - Receive client connection: Client-3568585a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,535 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,642 - distributed.scheduler - INFO - Remove client Client-3568585a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,642 - distributed.scheduler - INFO - Remove client Client-3568585a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,643 - distributed.scheduler - INFO - Close client connection: Client-3568585a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,643 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42211
-2022-08-26 14:08:18,643 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42545
-2022-08-26 14:08:18,644 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42211', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:18,644 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42211
-2022-08-26 14:08:18,644 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42545', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:18,644 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42545
-2022-08-26 14:08:18,644 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:18,644 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5cd311ef-d5eb-48ff-afea-cd8b54f73b97 Address tcp://127.0.0.1:42211 Status: Status.closing
-2022-08-26 14:08:18,645 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8bd206c5-08ec-41bc-ba5e-d125b3faad32 Address tcp://127.0.0.1:42545 Status: Status.closing
-2022-08-26 14:08:18,645 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:18,646 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:18,849 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_queues.py::test_Future_knows_status_immediately 2022-08-26 14:08:18,855 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:18,857 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:18,857 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42117
-2022-08-26 14:08:18,857 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36235
-2022-08-26 14:08:18,861 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38487
-2022-08-26 14:08:18,861 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38487
-2022-08-26 14:08:18,862 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:18,862 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45785
-2022-08-26 14:08:18,862 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42117
-2022-08-26 14:08:18,862 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,862 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:18,862 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:18,862 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rrp5owmc
-2022-08-26 14:08:18,862 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,862 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42811
-2022-08-26 14:08:18,862 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42811
-2022-08-26 14:08:18,862 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:18,862 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45357
-2022-08-26 14:08:18,863 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42117
-2022-08-26 14:08:18,863 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,863 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:18,863 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:18,863 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tbd46x65
-2022-08-26 14:08:18,863 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,865 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38487', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:18,866 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38487
-2022-08-26 14:08:18,866 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,866 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42811', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:18,866 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42811
-2022-08-26 14:08:18,866 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,867 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42117
-2022-08-26 14:08:18,867 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,867 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42117
-2022-08-26 14:08:18,867 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:18,867 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,867 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,881 - distributed.scheduler - INFO - Receive client connection: Client-359d1ef6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,881 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,888 - distributed.scheduler - INFO - Receive client connection: Client-359e3686-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,888 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:18,894 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:08:18,899 - distributed.scheduler - INFO - Remove client Client-359e3686-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,900 - distributed.scheduler - INFO - Remove client Client-359e3686-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,900 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:42117 remote=tcp://127.0.0.1:46978>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:08:18,900 - distributed.scheduler - INFO - Close client connection: Client-359e3686-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,903 - distributed.scheduler - INFO - Remove client Client-359d1ef6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,903 - distributed.scheduler - INFO - Remove client Client-359d1ef6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,903 - distributed.scheduler - INFO - Close client connection: Client-359d1ef6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:18,904 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38487
-2022-08-26 14:08:18,904 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42811
-2022-08-26 14:08:18,905 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38487', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:18,905 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38487
-2022-08-26 14:08:18,905 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42811', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:18,906 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42811
-2022-08-26 14:08:18,906 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:18,906 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-47b3e3cf-fad0-4feb-992b-5039887ccf1d Address tcp://127.0.0.1:38487 Status: Status.closing
-2022-08-26 14:08:18,906 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1e88cb4e-d5ac-4463-89a6-72d76b97557e Address tcp://127.0.0.1:42811 Status: Status.closing
-2022-08-26 14:08:18,907 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:18,907 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:19,111 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_queues.py::test_erred_future 2022-08-26 14:08:19,117 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:19,119 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:19,119 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43703
-2022-08-26 14:08:19,119 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36041
-2022-08-26 14:08:19,123 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40061
-2022-08-26 14:08:19,124 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40061
-2022-08-26 14:08:19,124 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:19,124 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41479
-2022-08-26 14:08:19,124 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43703
-2022-08-26 14:08:19,124 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,124 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:19,124 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:19,124 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zdr7alxq
-2022-08-26 14:08:19,124 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,124 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45117
-2022-08-26 14:08:19,124 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45117
-2022-08-26 14:08:19,124 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:19,125 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33933
-2022-08-26 14:08:19,125 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43703
-2022-08-26 14:08:19,125 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,125 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:19,125 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:19,125 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ab2vq0qp
-2022-08-26 14:08:19,125 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,128 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40061', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:19,128 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40061
-2022-08-26 14:08:19,128 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,128 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45117', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:19,129 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45117
-2022-08-26 14:08:19,129 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,129 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43703
-2022-08-26 14:08:19,129 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,129 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43703
-2022-08-26 14:08:19,129 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,129 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,130 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,143 - distributed.scheduler - INFO - Receive client connection: Client-35c522aa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:19,143 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,158 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:08:19,259 - distributed.scheduler - INFO - Remove client Client-35c522aa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:19,259 - distributed.scheduler - INFO - Remove client Client-35c522aa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:19,259 - distributed.scheduler - INFO - Close client connection: Client-35c522aa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:19,260 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40061
-2022-08-26 14:08:19,260 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45117
-2022-08-26 14:08:19,261 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40061', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:19,261 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40061
-2022-08-26 14:08:19,261 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45117', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:19,261 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45117
-2022-08-26 14:08:19,261 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:19,261 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2573f150-5113-4624-9bad-de715b552ed3 Address tcp://127.0.0.1:40061 Status: Status.closing
-2022-08-26 14:08:19,262 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ba483b35-e864-46e7-bed7-18234e14464f Address tcp://127.0.0.1:45117 Status: Status.closing
-2022-08-26 14:08:19,262 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:19,263 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:19,467 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_queues.py::test_close 2022-08-26 14:08:19,473 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:19,474 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:19,475 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33675
-2022-08-26 14:08:19,475 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39761
-2022-08-26 14:08:19,479 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42879
-2022-08-26 14:08:19,479 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42879
-2022-08-26 14:08:19,479 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:19,479 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37645
-2022-08-26 14:08:19,479 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33675
-2022-08-26 14:08:19,479 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,479 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:19,479 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:19,479 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tglzovrl
-2022-08-26 14:08:19,479 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,480 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42035
-2022-08-26 14:08:19,480 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42035
-2022-08-26 14:08:19,480 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:19,480 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46749
-2022-08-26 14:08:19,480 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33675
-2022-08-26 14:08:19,480 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,480 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:19,480 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:19,480 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-isltbtzo
-2022-08-26 14:08:19,480 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,483 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42879', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:19,483 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42879
-2022-08-26 14:08:19,483 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,484 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42035', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:19,484 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42035
-2022-08-26 14:08:19,484 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,484 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33675
-2022-08-26 14:08:19,484 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,485 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33675
-2022-08-26 14:08:19,485 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,485 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,485 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,499 - distributed.scheduler - INFO - Receive client connection: Client-35fb5e83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:19,499 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,522 - distributed.scheduler - INFO - Remove client Client-35fb5e83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:19,522 - distributed.scheduler - INFO - Remove client Client-35fb5e83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:19,522 - distributed.scheduler - INFO - Close client connection: Client-35fb5e83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:19,523 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42879
-2022-08-26 14:08:19,523 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42035
-2022-08-26 14:08:19,524 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42879', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:19,524 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42879
-2022-08-26 14:08:19,524 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42035', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:19,524 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42035
-2022-08-26 14:08:19,524 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:19,524 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2900b998-f662-4fba-8b33-42745c0d2c3f Address tcp://127.0.0.1:42879 Status: Status.closing
-2022-08-26 14:08:19,525 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9bfd1980-065a-44d6-8477-319744478183 Address tcp://127.0.0.1:42035 Status: Status.closing
-2022-08-26 14:08:19,525 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:19,526 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:19,729 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_queues.py::test_timeout 2022-08-26 14:08:19,735 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:19,737 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:19,737 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46037
-2022-08-26 14:08:19,737 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39981
-2022-08-26 14:08:19,741 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39441
-2022-08-26 14:08:19,741 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39441
-2022-08-26 14:08:19,741 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:19,741 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45105
-2022-08-26 14:08:19,741 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46037
-2022-08-26 14:08:19,741 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,741 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:19,742 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:19,742 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zp4cqm_7
-2022-08-26 14:08:19,742 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,742 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35903
-2022-08-26 14:08:19,742 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35903
-2022-08-26 14:08:19,742 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:19,742 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40893
-2022-08-26 14:08:19,742 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46037
-2022-08-26 14:08:19,742 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,742 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:19,742 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:19,743 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qu1px_nf
-2022-08-26 14:08:19,743 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,745 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39441', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:19,746 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39441
-2022-08-26 14:08:19,746 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,746 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35903', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:19,746 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35903
-2022-08-26 14:08:19,746 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,747 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46037
-2022-08-26 14:08:19,747 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,747 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46037
-2022-08-26 14:08:19,747 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:19,747 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,747 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:19,761 - distributed.scheduler - INFO - Receive client connection: Client-362365d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:19,761 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:20,064 - distributed.core - ERROR - Exception while handling op queue_get
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/queues.py", line 159, in get
-    await getter
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/queues.py", line 119, in get
-    record = await asyncio.wait_for(self.queues[name].get(), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-2022-08-26 14:08:20,368 - distributed.core - ERROR - Exception while handling op queue_put
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/queues.py", line 121, in put
-    await putter
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/queues.py", line 75, in put
-    await asyncio.wait_for(self.queues[name].put(record), timeout=timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-2022-08-26 14:08:20,370 - distributed.scheduler - INFO - Remove client Client-362365d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:20,370 - distributed.scheduler - INFO - Remove client Client-362365d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:20,371 - distributed.scheduler - INFO - Close client connection: Client-362365d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:20,371 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39441
-2022-08-26 14:08:20,371 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35903
-2022-08-26 14:08:20,372 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39441', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:20,372 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39441
-2022-08-26 14:08:20,372 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35903', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:20,373 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35903
-2022-08-26 14:08:20,373 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:20,373 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d1d6538c-cd78-43c9-99b5-98eb3b73b29a Address tcp://127.0.0.1:39441 Status: Status.closing
-2022-08-26 14:08:20,373 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2b940c84-83f9-42f3-9801-12ac83c97cbf Address tcp://127.0.0.1:35903 Status: Status.closing
-2022-08-26 14:08:20,374 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:20,374 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:20,578 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_queues.py::test_2220 2022-08-26 14:08:20,584 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:20,586 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:20,586 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44215
-2022-08-26 14:08:20,586 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45021
-2022-08-26 14:08:20,590 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44937
-2022-08-26 14:08:20,590 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44937
-2022-08-26 14:08:20,590 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:20,591 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46527
-2022-08-26 14:08:20,591 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44215
-2022-08-26 14:08:20,591 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:20,591 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:20,591 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:20,591 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hb_qg4e6
-2022-08-26 14:08:20,591 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:20,591 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34669
-2022-08-26 14:08:20,591 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34669
-2022-08-26 14:08:20,591 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:20,591 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37059
-2022-08-26 14:08:20,592 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44215
-2022-08-26 14:08:20,592 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:20,592 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:20,592 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:20,592 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-66ft9l9r
-2022-08-26 14:08:20,592 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:20,595 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44937', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:20,595 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44937
-2022-08-26 14:08:20,595 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:20,595 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34669', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:20,596 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34669
-2022-08-26 14:08:20,596 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:20,596 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44215
-2022-08-26 14:08:20,596 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:20,596 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44215
-2022-08-26 14:08:20,596 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:20,596 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:20,597 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:20,610 - distributed.scheduler - INFO - Receive client connection: Client-36a4f8e3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:20,610 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:20,646 - distributed.scheduler - INFO - Remove client Client-36a4f8e3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:20,646 - distributed.scheduler - INFO - Remove client Client-36a4f8e3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:20,646 - distributed.scheduler - INFO - Close client connection: Client-36a4f8e3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:20,647 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44937
-2022-08-26 14:08:20,647 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34669
-2022-08-26 14:08:20,648 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44937', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:20,648 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44937
-2022-08-26 14:08:20,648 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34669', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:20,648 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34669
-2022-08-26 14:08:20,648 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:20,649 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-80efdd44-b449-4fc7-9769-5b594ffe0ad6 Address tcp://127.0.0.1:44937 Status: Status.closing
-2022-08-26 14:08:20,649 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e1fa3d6b-917f-48a1-ba1a-5441398719bc Address tcp://127.0.0.1:34669 Status: Status.closing
-2022-08-26 14:08:20,650 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:20,650 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:20,854 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-55
-PASSED
-distributed/tests/test_queues.py::test_queue_in_task 2022-08-26 14:08:21,227 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 14:08:21,230 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:21,232 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:21,234 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 14:08:21,234 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:21,234 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:53551
-2022-08-26 14:08:21,235 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 14:08:21,244 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:32859'
-2022-08-26 14:08:21,670 - distributed.scheduler - INFO - Receive client connection: Client-36cb8f7f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:21,833 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,016 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42237
-2022-08-26 14:08:22,016 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42237
-2022-08-26 14:08:22,016 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37107
-2022-08-26 14:08:22,016 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:53551
-2022-08-26 14:08:22,016 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,016 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:08:22,016 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:22,016 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jchet2pi
-2022-08-26 14:08:22,016 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,020 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42237', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:22,020 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42237
-2022-08-26 14:08:22,020 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,020 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:53551
-2022-08-26 14:08:22,020 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,021 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,045 - distributed.scheduler - INFO - Receive client connection: Client-worker-377fb2aa-2583-11ed-bf3d-00d861bc4509
-2022-08-26 14:08:22,045 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,061 - distributed.scheduler - INFO - Remove client Client-36cb8f7f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:22,061 - distributed.scheduler - INFO - Remove client Client-36cb8f7f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:22,061 - distributed.scheduler - INFO - Close client connection: Client-36cb8f7f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:22,061 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 14:08:22,061 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:32859'.
-2022-08-26 14:08:22,061 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:08:22,062 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42237
-2022-08-26 14:08:22,063 - distributed.scheduler - INFO - Remove client Client-worker-377fb2aa-2583-11ed-bf3d-00d861bc4509
-2022-08-26 14:08:22,063 - distributed.scheduler - INFO - Remove client Client-worker-377fb2aa-2583-11ed-bf3d-00d861bc4509
-2022-08-26 14:08:22,063 - distributed.scheduler - INFO - Close client connection: Client-worker-377fb2aa-2583-11ed-bf3d-00d861bc4509
-2022-08-26 14:08:22,064 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-049260ae-6eb5-4b3a-ab66-8f9211d6636f Address tcp://127.0.0.1:42237 Status: Status.closing
-2022-08-26 14:08:22,064 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42237', status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:22,064 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42237
-2022-08-26 14:08:22,064 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:22,252 - distributed.dask_worker - INFO - End worker
-2022-08-26 14:08:22,325 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 14:08:22,325 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:22,326 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:22,326 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:53551'
-2022-08-26 14:08:22,326 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/tests/test_reschedule.py::test_scheduler_reschedule 2022-08-26 14:08:22,495 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:22,497 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:22,497 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35027
-2022-08-26 14:08:22,497 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38219
-2022-08-26 14:08:22,501 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44303
-2022-08-26 14:08:22,501 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44303
-2022-08-26 14:08:22,501 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:22,501 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36851
-2022-08-26 14:08:22,502 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35027
-2022-08-26 14:08:22,502 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,502 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:22,502 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:22,502 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4hk8dtpg
-2022-08-26 14:08:22,502 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,502 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45947
-2022-08-26 14:08:22,502 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45947
-2022-08-26 14:08:22,502 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:22,502 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38925
-2022-08-26 14:08:22,503 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35027
-2022-08-26 14:08:22,503 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,503 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:22,503 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:22,503 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vfrxfc6k
-2022-08-26 14:08:22,503 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,506 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44303', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:22,506 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44303
-2022-08-26 14:08:22,506 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,506 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45947', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:22,507 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45947
-2022-08-26 14:08:22,507 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,507 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35027
-2022-08-26 14:08:22,507 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,507 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35027
-2022-08-26 14:08:22,507 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,507 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,507 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,521 - distributed.scheduler - INFO - Receive client connection: Client-37c88e38-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:22,521 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,625 - distributed.scheduler - INFO - Remove client Client-37c88e38-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:22,626 - distributed.scheduler - INFO - Remove client Client-37c88e38-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:22,626 - distributed.scheduler - INFO - Close client connection: Client-37c88e38-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:22,626 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44303
-2022-08-26 14:08:22,626 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45947
-2022-08-26 14:08:22,628 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d02cd815-ba4b-499e-a049-8da7f1d01724 Address tcp://127.0.0.1:44303 Status: Status.closing
-2022-08-26 14:08:22,628 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d8035231-aa75-42ab-a2d2-4e2fb9138cca Address tcp://127.0.0.1:45947 Status: Status.closing
-2022-08-26 14:08:22,628 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44303', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:22,629 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44303
-2022-08-26 14:08:22,629 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45947', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:22,629 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45947
-2022-08-26 14:08:22,629 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:22,660 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:22,660 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:22,867 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_reschedule.py::test_scheduler_reschedule_warns 2022-08-26 14:08:22,873 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:22,874 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:22,875 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39925
-2022-08-26 14:08:22,875 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35067
-2022-08-26 14:08:22,879 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35699
-2022-08-26 14:08:22,879 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35699
-2022-08-26 14:08:22,879 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:22,879 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43049
-2022-08-26 14:08:22,879 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39925
-2022-08-26 14:08:22,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,879 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:22,879 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:22,879 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9z76nte7
-2022-08-26 14:08:22,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,880 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39919
-2022-08-26 14:08:22,880 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39919
-2022-08-26 14:08:22,880 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:22,880 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43335
-2022-08-26 14:08:22,880 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39925
-2022-08-26 14:08:22,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,880 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:22,880 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:22,880 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2yed4jj2
-2022-08-26 14:08:22,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,883 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35699', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:22,883 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35699
-2022-08-26 14:08:22,883 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,884 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39919', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:22,884 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39919
-2022-08-26 14:08:22,884 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,884 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39925
-2022-08-26 14:08:22,884 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,885 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39925
-2022-08-26 14:08:22,885 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:22,885 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,885 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:22,896 - distributed.scheduler - WARNING - Attempting to reschedule task __this-key-does-not-exist__, which was not found on the scheduler. Aborting reschedule.
-2022-08-26 14:08:22,896 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35699
-2022-08-26 14:08:22,897 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39919
-2022-08-26 14:08:22,898 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35699', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:22,898 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35699
-2022-08-26 14:08:22,898 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39919', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:22,898 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39919
-2022-08-26 14:08:22,898 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:22,898 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-82789daa-0ce9-4cc0-bfb6-6ababdd77c20 Address tcp://127.0.0.1:35699 Status: Status.closing
-2022-08-26 14:08:22,898 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a27a84f0-d795-4b09-a977-b417d9a2d3e9 Address tcp://127.0.0.1:39919 Status: Status.closing
-2022-08-26 14:08:22,899 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:22,899 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:23,108 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_reschedule.py::test_raise_reschedule[executing] 2022-08-26 14:08:23,114 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:23,116 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:23,116 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46523
-2022-08-26 14:08:23,116 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42509
-2022-08-26 14:08:23,120 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44671
-2022-08-26 14:08:23,121 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44671
-2022-08-26 14:08:23,121 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:23,121 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45755
-2022-08-26 14:08:23,121 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46523
-2022-08-26 14:08:23,121 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:23,121 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:23,121 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:23,121 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8rlku16y
-2022-08-26 14:08:23,121 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:23,122 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34157
-2022-08-26 14:08:23,122 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34157
-2022-08-26 14:08:23,122 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:23,122 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41701
-2022-08-26 14:08:23,122 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46523
-2022-08-26 14:08:23,122 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:23,122 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:23,122 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:23,122 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0r5jptux
-2022-08-26 14:08:23,122 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:23,125 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44671', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:23,125 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44671
-2022-08-26 14:08:23,125 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:23,126 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34157', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:23,126 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34157
-2022-08-26 14:08:23,126 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:23,126 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46523
-2022-08-26 14:08:23,126 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:23,127 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46523
-2022-08-26 14:08:23,127 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:23,127 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:23,127 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:23,141 - distributed.scheduler - INFO - Receive client connection: Client-38271776-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:23,141 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:23,881 - distributed.scheduler - INFO - Remove client Client-38271776-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:23,881 - distributed.scheduler - INFO - Remove client Client-38271776-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:23,881 - distributed.scheduler - INFO - Close client connection: Client-38271776-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:23,882 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44671
-2022-08-26 14:08:23,882 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34157
-2022-08-26 14:08:23,883 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34157', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:23,883 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34157
-2022-08-26 14:08:23,883 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-92b978ae-5105-4dcd-a963-434f3e7df061 Address tcp://127.0.0.1:34157 Status: Status.closing
-2022-08-26 14:08:23,884 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-83656251-1ca6-44a6-8095-bcf40831f72a Address tcp://127.0.0.1:44671 Status: Status.closing
-2022-08-26 14:08:23,884 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44671', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:23,884 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44671
-2022-08-26 14:08:23,884 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:23,967 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:23,967 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:24,171 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_reschedule.py::test_raise_reschedule[long-running] 2022-08-26 14:08:24,177 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:24,179 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:24,179 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41413
-2022-08-26 14:08:24,179 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35389
-2022-08-26 14:08:24,183 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43011
-2022-08-26 14:08:24,183 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43011
-2022-08-26 14:08:24,183 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:24,183 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41009
-2022-08-26 14:08:24,183 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41413
-2022-08-26 14:08:24,183 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:24,183 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:24,184 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:24,184 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ubogsz6f
-2022-08-26 14:08:24,184 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:24,184 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45703
-2022-08-26 14:08:24,184 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45703
-2022-08-26 14:08:24,184 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:24,184 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41867
-2022-08-26 14:08:24,184 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41413
-2022-08-26 14:08:24,184 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:24,184 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:24,184 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:24,185 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sqf0csgt
-2022-08-26 14:08:24,185 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:24,187 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43011', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:24,188 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43011
-2022-08-26 14:08:24,188 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:24,188 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45703', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:24,188 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45703
-2022-08-26 14:08:24,188 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:24,189 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41413
-2022-08-26 14:08:24,189 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:24,189 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41413
-2022-08-26 14:08:24,189 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:24,189 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:24,189 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:24,203 - distributed.scheduler - INFO - Receive client connection: Client-38c92f27-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:24,203 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:24,740 - distributed.scheduler - INFO - Remove client Client-38c92f27-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:24,741 - distributed.scheduler - INFO - Remove client Client-38c92f27-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:24,741 - distributed.scheduler - INFO - Close client connection: Client-38c92f27-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:24,741 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43011
-2022-08-26 14:08:24,741 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45703
-2022-08-26 14:08:24,743 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45703', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:24,743 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45703
-2022-08-26 14:08:24,743 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dea546f0-e73f-4707-bb18-47e9dbea59fc Address tcp://127.0.0.1:45703 Status: Status.closing
-2022-08-26 14:08:24,743 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e9434f53-9cf8-40c0-a7fd-080ce35dd3cb Address tcp://127.0.0.1:43011 Status: Status.closing
-2022-08-26 14:08:24,744 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43011', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:24,744 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43011
-2022-08-26 14:08:24,744 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:24,826 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:24,827 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:25,031 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_reschedule.py::test_cancelled_reschedule[executing] 2022-08-26 14:08:25,037 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:25,038 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:25,039 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37371
-2022-08-26 14:08:25,039 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33651
-2022-08-26 14:08:25,041 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39893
-2022-08-26 14:08:25,041 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39893
-2022-08-26 14:08:25,041 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:25,042 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43971
-2022-08-26 14:08:25,042 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37371
-2022-08-26 14:08:25,042 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,042 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:25,042 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:25,042 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qtfcc1zr
-2022-08-26 14:08:25,042 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,044 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39893', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:25,044 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39893
-2022-08-26 14:08:25,044 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,044 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37371
-2022-08-26 14:08:25,044 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,044 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,058 - distributed.scheduler - INFO - Receive client connection: Client-394ba582-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,058 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,095 - distributed.scheduler - INFO - Remove client Client-394ba582-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,096 - distributed.scheduler - INFO - Remove client Client-394ba582-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,096 - distributed.scheduler - INFO - Close client connection: Client-394ba582-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,096 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39893
-2022-08-26 14:08:25,097 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39893', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:25,097 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39893
-2022-08-26 14:08:25,097 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:25,097 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-abd20cd6-87b1-4e0f-8d73-cde031745211 Address tcp://127.0.0.1:39893 Status: Status.closing
-2022-08-26 14:08:25,098 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:25,098 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:25,301 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_reschedule.py::test_cancelled_reschedule[long-running] 2022-08-26 14:08:25,307 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:25,309 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:25,309 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33387
-2022-08-26 14:08:25,309 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45719
-2022-08-26 14:08:25,312 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40743
-2022-08-26 14:08:25,312 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40743
-2022-08-26 14:08:25,312 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:25,312 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33945
-2022-08-26 14:08:25,312 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33387
-2022-08-26 14:08:25,312 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,312 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:25,312 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:25,312 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vc0x1_aa
-2022-08-26 14:08:25,312 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,314 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40743', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:25,314 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40743
-2022-08-26 14:08:25,315 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,315 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33387
-2022-08-26 14:08:25,315 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,315 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,328 - distributed.scheduler - INFO - Receive client connection: Client-3974ed6b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,366 - distributed.scheduler - INFO - Remove client Client-3974ed6b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,366 - distributed.scheduler - INFO - Remove client Client-3974ed6b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,367 - distributed.scheduler - INFO - Close client connection: Client-3974ed6b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,367 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40743
-2022-08-26 14:08:25,368 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40743', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:25,368 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40743
-2022-08-26 14:08:25,368 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:25,368 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-357c06bb-5920-41c3-9479-e00b682f51e4 Address tcp://127.0.0.1:40743 Status: Status.closing
-2022-08-26 14:08:25,369 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:25,369 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:25,571 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_reschedule.py::test_cancelled_reschedule_worker_state[executing] PASSED
-distributed/tests/test_reschedule.py::test_cancelled_reschedule_worker_state[long-running] PASSED
-distributed/tests/test_reschedule.py::test_reschedule_releases[executing] PASSED
-distributed/tests/test_reschedule.py::test_reschedule_releases[long-running] PASSED
-distributed/tests/test_reschedule.py::test_reschedule_cancelled[executing] PASSED
-distributed/tests/test_reschedule.py::test_reschedule_cancelled[long-running] PASSED
-distributed/tests/test_reschedule.py::test_reschedule_resumed[executing] PASSED
-distributed/tests/test_reschedule.py::test_reschedule_resumed[long-running] PASSED
-distributed/tests/test_resources.py::test_resource_submit 2022-08-26 14:08:25,587 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:25,589 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:25,589 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41603
-2022-08-26 14:08:25,589 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40833
-2022-08-26 14:08:25,594 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37869
-2022-08-26 14:08:25,594 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37869
-2022-08-26 14:08:25,594 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:25,594 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38347
-2022-08-26 14:08:25,594 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41603
-2022-08-26 14:08:25,594 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,594 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:25,594 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:25,594 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3s4kc84a
-2022-08-26 14:08:25,594 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,595 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42039
-2022-08-26 14:08:25,595 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42039
-2022-08-26 14:08:25,595 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:25,595 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35715
-2022-08-26 14:08:25,595 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41603
-2022-08-26 14:08:25,595 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,595 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:25,595 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:25,595 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c3t_iy7b
-2022-08-26 14:08:25,595 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,598 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37869', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:25,598 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37869
-2022-08-26 14:08:25,598 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,598 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42039', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:25,599 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42039
-2022-08-26 14:08:25,599 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,599 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41603
-2022-08-26 14:08:25,599 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,599 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41603
-2022-08-26 14:08:25,599 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,599 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,600 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,613 - distributed.scheduler - INFO - Receive client connection: Client-39a05eb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,613 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,630 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41033
-2022-08-26 14:08:25,630 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41033
-2022-08-26 14:08:25,630 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39569
-2022-08-26 14:08:25,630 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41603
-2022-08-26 14:08:25,630 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,630 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:08:25,630 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:25,630 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pxq51w5a
-2022-08-26 14:08:25,631 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,632 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41033', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:25,633 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41033
-2022-08-26 14:08:25,633 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,633 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41603
-2022-08-26 14:08:25,633 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,633 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,636 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41033
-2022-08-26 14:08:25,637 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-31026e5b-f0a9-4f6d-8b1f-f1de6a884773 Address tcp://127.0.0.1:41033 Status: Status.closing
-2022-08-26 14:08:25,637 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41033', status: closing, memory: 1, processing: 0>
-2022-08-26 14:08:25,637 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41033
-2022-08-26 14:08:25,648 - distributed.scheduler - INFO - Remove client Client-39a05eb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,648 - distributed.scheduler - INFO - Remove client Client-39a05eb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,649 - distributed.scheduler - INFO - Close client connection: Client-39a05eb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,649 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37869
-2022-08-26 14:08:25,649 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42039
-2022-08-26 14:08:25,650 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37869', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:25,650 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37869
-2022-08-26 14:08:25,650 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42039', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:25,651 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42039
-2022-08-26 14:08:25,651 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:25,651 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6b46a3e4-bec4-40b9-9c6e-8cdb310d03f2 Address tcp://127.0.0.1:37869 Status: Status.closing
-2022-08-26 14:08:25,651 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9dfd9a6f-002a-4116-8924-5398ec525439 Address tcp://127.0.0.1:42039 Status: Status.closing
-2022-08-26 14:08:25,652 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:25,652 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:25,855 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_submit_many_non_overlapping 2022-08-26 14:08:25,861 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:25,863 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:25,863 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37071
-2022-08-26 14:08:25,863 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36013
-2022-08-26 14:08:25,868 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44557
-2022-08-26 14:08:25,868 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44557
-2022-08-26 14:08:25,868 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:25,868 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46679
-2022-08-26 14:08:25,868 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37071
-2022-08-26 14:08:25,868 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,868 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:25,868 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:25,868 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3pzkmx5z
-2022-08-26 14:08:25,868 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,868 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38125
-2022-08-26 14:08:25,868 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38125
-2022-08-26 14:08:25,869 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:25,869 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38071
-2022-08-26 14:08:25,869 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37071
-2022-08-26 14:08:25,869 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,869 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:25,869 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:25,869 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6cah3zuq
-2022-08-26 14:08:25,869 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,872 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44557', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:25,872 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44557
-2022-08-26 14:08:25,872 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,873 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38125', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:25,873 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38125
-2022-08-26 14:08:25,873 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,873 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37071
-2022-08-26 14:08:25,873 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,873 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37071
-2022-08-26 14:08:25,873 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:25,874 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,874 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,887 - distributed.scheduler - INFO - Receive client connection: Client-39ca3662-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,888 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:25,909 - distributed.scheduler - INFO - Remove client Client-39ca3662-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,909 - distributed.scheduler - INFO - Remove client Client-39ca3662-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,910 - distributed.scheduler - INFO - Close client connection: Client-39ca3662-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:25,911 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44557
-2022-08-26 14:08:25,911 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38125
-2022-08-26 14:08:25,912 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38125', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:25,912 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38125
-2022-08-26 14:08:25,912 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b4e52995-92d1-4806-84a4-01014f438a1d Address tcp://127.0.0.1:38125 Status: Status.closing
-2022-08-26 14:08:25,912 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-670858f6-115b-4e0b-a021-749fb6f5d0e3 Address tcp://127.0.0.1:44557 Status: Status.closing
-2022-08-26 14:08:25,913 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44557', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:25,913 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44557
-2022-08-26 14:08:25,913 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:25,914 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:25,914 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:26,117 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_submit_many_non_overlapping_2 2022-08-26 14:08:26,123 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:26,124 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:26,125 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36401
-2022-08-26 14:08:26,125 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46881
-2022-08-26 14:08:26,129 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34035
-2022-08-26 14:08:26,129 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34035
-2022-08-26 14:08:26,129 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:26,129 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45419
-2022-08-26 14:08:26,129 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36401
-2022-08-26 14:08:26,129 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:26,129 - distributed.worker - INFO -               Threads:                          4
-2022-08-26 14:08:26,129 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:26,130 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-w5wqyoxy
-2022-08-26 14:08:26,130 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:26,130 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38049
-2022-08-26 14:08:26,130 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38049
-2022-08-26 14:08:26,130 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:26,130 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34379
-2022-08-26 14:08:26,130 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36401
-2022-08-26 14:08:26,130 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:26,130 - distributed.worker - INFO -               Threads:                          4
-2022-08-26 14:08:26,131 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:26,131 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hqtvosoh
-2022-08-26 14:08:26,131 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:26,133 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34035', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:26,134 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34035
-2022-08-26 14:08:26,134 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:26,134 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38049', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:26,134 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38049
-2022-08-26 14:08:26,134 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:26,135 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36401
-2022-08-26 14:08:26,135 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:26,135 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36401
-2022-08-26 14:08:26,135 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:26,135 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:26,135 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:26,149 - distributed.scheduler - INFO - Receive client connection: Client-39f224e0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:26,149 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:27,002 - distributed.scheduler - INFO - Remove client Client-39f224e0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:27,002 - distributed.scheduler - INFO - Remove client Client-39f224e0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:27,002 - distributed.scheduler - INFO - Close client connection: Client-39f224e0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:27,002 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34035
-2022-08-26 14:08:27,003 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38049
-2022-08-26 14:08:27,004 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34035', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:27,004 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34035
-2022-08-26 14:08:27,004 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38049', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:27,004 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38049
-2022-08-26 14:08:27,004 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:27,004 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ac2d4ba8-0657-4e94-9f31-c788b9664b05 Address tcp://127.0.0.1:34035 Status: Status.closing
-2022-08-26 14:08:27,005 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d9ccc57e-3227-4c17-b7f6-984ffc42a887 Address tcp://127.0.0.1:38049 Status: Status.closing
-2022-08-26 14:08:27,006 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:27,006 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:27,213 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_move 2022-08-26 14:08:27,219 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:27,220 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:27,220 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33135
-2022-08-26 14:08:27,220 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43695
-2022-08-26 14:08:27,225 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45767
-2022-08-26 14:08:27,225 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45767
-2022-08-26 14:08:27,225 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:27,225 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42457
-2022-08-26 14:08:27,225 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33135
-2022-08-26 14:08:27,225 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,225 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:27,225 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:27,225 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7_pw2w_n
-2022-08-26 14:08:27,225 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,226 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37681
-2022-08-26 14:08:27,226 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37681
-2022-08-26 14:08:27,226 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:27,226 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35765
-2022-08-26 14:08:27,226 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33135
-2022-08-26 14:08:27,226 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,226 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:27,226 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:27,226 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-20im3e19
-2022-08-26 14:08:27,226 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,229 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45767', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:27,229 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45767
-2022-08-26 14:08:27,229 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:27,230 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37681', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:27,230 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37681
-2022-08-26 14:08:27,230 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:27,230 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33135
-2022-08-26 14:08:27,230 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,231 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33135
-2022-08-26 14:08:27,231 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,231 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:27,231 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:27,245 - distributed.scheduler - INFO - Receive client connection: Client-3a9952d4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:27,245 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:27,267 - distributed.scheduler - INFO - Remove client Client-3a9952d4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:27,268 - distributed.scheduler - INFO - Remove client Client-3a9952d4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:27,268 - distributed.scheduler - INFO - Close client connection: Client-3a9952d4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:27,269 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45767
-2022-08-26 14:08:27,270 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37681
-2022-08-26 14:08:27,270 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bd0d2142-2fc8-4788-ab77-8e8440230c04 Address tcp://127.0.0.1:45767 Status: Status.closing
-2022-08-26 14:08:27,271 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-61250938-eee3-4f84-aa00-4428a696aeef Address tcp://127.0.0.1:37681 Status: Status.closing
-2022-08-26 14:08:27,271 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45767', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:27,271 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45767
-2022-08-26 14:08:27,271 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37681', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:27,272 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37681
-2022-08-26 14:08:27,272 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:27,272 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:27,273 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:27,477 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_dont_work_steal 2022-08-26 14:08:27,484 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:27,485 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:27,485 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41675
-2022-08-26 14:08:27,485 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40079
-2022-08-26 14:08:27,490 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37101
-2022-08-26 14:08:27,490 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37101
-2022-08-26 14:08:27,490 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:27,490 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44075
-2022-08-26 14:08:27,490 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41675
-2022-08-26 14:08:27,490 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,490 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:27,490 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:27,490 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ovvqkxrj
-2022-08-26 14:08:27,490 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,491 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46343
-2022-08-26 14:08:27,491 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46343
-2022-08-26 14:08:27,491 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:27,491 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37963
-2022-08-26 14:08:27,491 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41675
-2022-08-26 14:08:27,491 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,491 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:27,491 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:27,491 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dzpwwi4e
-2022-08-26 14:08:27,491 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,494 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37101', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:27,494 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37101
-2022-08-26 14:08:27,494 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:27,495 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46343', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:27,495 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46343
-2022-08-26 14:08:27,495 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:27,495 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41675
-2022-08-26 14:08:27,495 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,496 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41675
-2022-08-26 14:08:27,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:27,496 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:27,496 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:27,510 - distributed.scheduler - INFO - Receive client connection: Client-3ac1c2ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:27,510 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,049 - distributed.scheduler - INFO - Remove client Client-3ac1c2ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,050 - distributed.scheduler - INFO - Remove client Client-3ac1c2ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,050 - distributed.scheduler - INFO - Close client connection: Client-3ac1c2ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,050 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37101
-2022-08-26 14:08:28,051 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46343
-2022-08-26 14:08:28,052 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37101', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:28,052 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37101
-2022-08-26 14:08:28,052 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46343', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:28,052 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46343
-2022-08-26 14:08:28,052 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:28,052 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-73f593a3-eadb-4251-8056-29c5a918de68 Address tcp://127.0.0.1:37101 Status: Status.closing
-2022-08-26 14:08:28,053 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-45ba940c-2271-47ae-8422-94660b0b2c19 Address tcp://127.0.0.1:46343 Status: Status.closing
-2022-08-26 14:08:28,054 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:28,054 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:28,260 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_map 2022-08-26 14:08:28,265 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:28,267 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:28,267 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45027
-2022-08-26 14:08:28,267 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44079
-2022-08-26 14:08:28,272 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36243
-2022-08-26 14:08:28,272 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36243
-2022-08-26 14:08:28,272 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:28,272 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43197
-2022-08-26 14:08:28,272 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45027
-2022-08-26 14:08:28,272 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,272 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:28,272 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:28,272 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s1bqf_6v
-2022-08-26 14:08:28,272 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,273 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36869
-2022-08-26 14:08:28,273 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36869
-2022-08-26 14:08:28,273 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:28,273 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33511
-2022-08-26 14:08:28,273 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45027
-2022-08-26 14:08:28,273 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,273 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:28,273 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:28,273 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s2159jfw
-2022-08-26 14:08:28,273 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,276 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36243', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:28,276 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36243
-2022-08-26 14:08:28,276 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,277 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36869', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:28,277 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36869
-2022-08-26 14:08:28,277 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,277 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45027
-2022-08-26 14:08:28,277 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,278 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45027
-2022-08-26 14:08:28,278 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,278 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,278 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,292 - distributed.scheduler - INFO - Receive client connection: Client-3b3913ff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,292 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,325 - distributed.scheduler - INFO - Remove client Client-3b3913ff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,325 - distributed.scheduler - INFO - Remove client Client-3b3913ff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,325 - distributed.scheduler - INFO - Close client connection: Client-3b3913ff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,325 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36243
-2022-08-26 14:08:28,326 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36869
-2022-08-26 14:08:28,327 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36243', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:28,327 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36243
-2022-08-26 14:08:28,327 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36869', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:28,327 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36869
-2022-08-26 14:08:28,327 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:28,327 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4eac1028-7a19-4ad6-8236-98be470c9702 Address tcp://127.0.0.1:36243 Status: Status.closing
-2022-08-26 14:08:28,327 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5bf61eb2-8ed1-4097-b092-067aee9e9f4a Address tcp://127.0.0.1:36869 Status: Status.closing
-2022-08-26 14:08:28,328 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:28,328 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:28,532 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_persist 2022-08-26 14:08:28,538 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:28,540 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:28,540 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40305
-2022-08-26 14:08:28,540 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38257
-2022-08-26 14:08:28,544 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36113
-2022-08-26 14:08:28,544 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36113
-2022-08-26 14:08:28,544 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:28,545 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43721
-2022-08-26 14:08:28,545 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40305
-2022-08-26 14:08:28,545 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,545 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:28,545 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:28,545 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bhno2611
-2022-08-26 14:08:28,545 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,545 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35553
-2022-08-26 14:08:28,545 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35553
-2022-08-26 14:08:28,545 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:28,546 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44179
-2022-08-26 14:08:28,546 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40305
-2022-08-26 14:08:28,546 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,546 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:28,546 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:28,546 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rtuxxue8
-2022-08-26 14:08:28,546 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,549 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36113', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:28,549 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36113
-2022-08-26 14:08:28,549 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,549 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35553', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:28,550 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35553
-2022-08-26 14:08:28,550 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,550 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40305
-2022-08-26 14:08:28,550 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,550 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40305
-2022-08-26 14:08:28,550 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,551 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,551 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,564 - distributed.scheduler - INFO - Receive client connection: Client-3b62af72-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,564 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,586 - distributed.scheduler - INFO - Remove client Client-3b62af72-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,586 - distributed.scheduler - INFO - Remove client Client-3b62af72-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,587 - distributed.scheduler - INFO - Close client connection: Client-3b62af72-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,588 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36113
-2022-08-26 14:08:28,588 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35553
-2022-08-26 14:08:28,589 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5aeca393-9878-4809-8181-761ee6041bea Address tcp://127.0.0.1:36113 Status: Status.closing
-2022-08-26 14:08:28,589 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1f2fdfc9-bd06-4867-8de2-b2a9b89c136e Address tcp://127.0.0.1:35553 Status: Status.closing
-2022-08-26 14:08:28,590 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36113', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:28,590 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36113
-2022-08-26 14:08:28,590 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35553', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:28,590 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35553
-2022-08-26 14:08:28,590 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:28,591 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:28,591 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:28,795 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_compute 2022-08-26 14:08:28,801 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:28,803 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:28,803 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45945
-2022-08-26 14:08:28,803 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36627
-2022-08-26 14:08:28,807 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44613
-2022-08-26 14:08:28,807 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44613
-2022-08-26 14:08:28,807 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:28,807 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37067
-2022-08-26 14:08:28,807 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45945
-2022-08-26 14:08:28,807 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,808 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:28,808 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:28,808 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dxs0pm13
-2022-08-26 14:08:28,808 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,808 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43995
-2022-08-26 14:08:28,808 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43995
-2022-08-26 14:08:28,808 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:28,808 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37427
-2022-08-26 14:08:28,808 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45945
-2022-08-26 14:08:28,808 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,809 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:28,809 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:28,809 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oh3yxxp2
-2022-08-26 14:08:28,809 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,812 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44613', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:28,812 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44613
-2022-08-26 14:08:28,812 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,812 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43995', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:28,813 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43995
-2022-08-26 14:08:28,813 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,813 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45945
-2022-08-26 14:08:28,813 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,813 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45945
-2022-08-26 14:08:28,813 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:28,813 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,814 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,827 - distributed.scheduler - INFO - Receive client connection: Client-3b8aca49-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,827 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:28,872 - distributed.scheduler - INFO - Remove client Client-3b8aca49-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,872 - distributed.scheduler - INFO - Remove client Client-3b8aca49-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,872 - distributed.scheduler - INFO - Close client connection: Client-3b8aca49-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:28,872 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44613
-2022-08-26 14:08:28,873 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43995
-2022-08-26 14:08:28,874 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44613', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:28,874 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44613
-2022-08-26 14:08:28,874 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43995', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:28,874 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43995
-2022-08-26 14:08:28,874 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:28,874 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e943e925-4a54-4dc5-a054-b9f1cc126ae2 Address tcp://127.0.0.1:44613 Status: Status.closing
-2022-08-26 14:08:28,874 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-314b7dd3-f197-47bc-9131-a2393d90be37 Address tcp://127.0.0.1:43995 Status: Status.closing
-2022-08-26 14:08:28,876 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:28,876 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:29,080 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_get 2022-08-26 14:08:29,086 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:29,087 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:29,088 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44787
-2022-08-26 14:08:29,088 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38359
-2022-08-26 14:08:29,092 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35793
-2022-08-26 14:08:29,092 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35793
-2022-08-26 14:08:29,092 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:29,092 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42759
-2022-08-26 14:08:29,092 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44787
-2022-08-26 14:08:29,092 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,092 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:29,092 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:29,093 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ks0xs7yx
-2022-08-26 14:08:29,093 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,093 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39983
-2022-08-26 14:08:29,093 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39983
-2022-08-26 14:08:29,093 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:29,093 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40403
-2022-08-26 14:08:29,093 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44787
-2022-08-26 14:08:29,093 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,093 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:29,093 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:29,094 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fovutah4
-2022-08-26 14:08:29,094 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,096 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35793', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:29,097 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35793
-2022-08-26 14:08:29,097 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,097 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39983', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:29,097 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39983
-2022-08-26 14:08:29,097 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,098 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44787
-2022-08-26 14:08:29,098 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,098 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44787
-2022-08-26 14:08:29,098 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,098 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,098 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,112 - distributed.scheduler - INFO - Receive client connection: Client-3bb63f44-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,112 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,134 - distributed.scheduler - INFO - Remove client Client-3bb63f44-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,134 - distributed.scheduler - INFO - Remove client Client-3bb63f44-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,135 - distributed.scheduler - INFO - Close client connection: Client-3bb63f44-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,136 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35793
-2022-08-26 14:08:29,136 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39983
-2022-08-26 14:08:29,137 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39983', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:29,137 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39983
-2022-08-26 14:08:29,137 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-337788b2-579c-4965-9278-fc26a7697eb4 Address tcp://127.0.0.1:39983 Status: Status.closing
-2022-08-26 14:08:29,137 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c7a25974-52d3-420a-8aa5-b49a34c502ee Address tcp://127.0.0.1:35793 Status: Status.closing
-2022-08-26 14:08:29,138 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35793', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:29,138 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35793
-2022-08-26 14:08:29,138 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:29,139 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:29,139 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:29,343 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_persist_multiple_collections 2022-08-26 14:08:29,348 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:29,350 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:29,350 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41391
-2022-08-26 14:08:29,350 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45047
-2022-08-26 14:08:29,355 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36713
-2022-08-26 14:08:29,355 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36713
-2022-08-26 14:08:29,355 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:29,355 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43181
-2022-08-26 14:08:29,355 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41391
-2022-08-26 14:08:29,355 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,355 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:29,355 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:29,355 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-alrcs2v9
-2022-08-26 14:08:29,355 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,356 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42071
-2022-08-26 14:08:29,356 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42071
-2022-08-26 14:08:29,356 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:29,356 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41857
-2022-08-26 14:08:29,356 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41391
-2022-08-26 14:08:29,356 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,356 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:29,356 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:29,356 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ldsfe9v1
-2022-08-26 14:08:29,356 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,359 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36713', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:29,359 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36713
-2022-08-26 14:08:29,359 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,360 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42071', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:29,360 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42071
-2022-08-26 14:08:29,360 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,360 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41391
-2022-08-26 14:08:29,360 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,361 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41391
-2022-08-26 14:08:29,361 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,361 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,361 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,375 - distributed.scheduler - INFO - Receive client connection: Client-3bde565c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,375 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,396 - distributed.scheduler - INFO - Remove client Client-3bde565c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,397 - distributed.scheduler - INFO - Remove client Client-3bde565c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,397 - distributed.scheduler - INFO - Close client connection: Client-3bde565c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,397 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36713
-2022-08-26 14:08:29,397 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42071
-2022-08-26 14:08:29,399 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36713', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:29,399 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36713
-2022-08-26 14:08:29,399 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42071', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:29,399 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42071
-2022-08-26 14:08:29,399 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:29,399 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6a2243c7-897f-4a81-8c1e-92fffc616548 Address tcp://127.0.0.1:36713 Status: Status.closing
-2022-08-26 14:08:29,399 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-99d13c49-6fdb-4e6f-9801-fb2d76860278 Address tcp://127.0.0.1:42071 Status: Status.closing
-2022-08-26 14:08:29,400 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:29,400 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:29,604 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_resources_str 2022-08-26 14:08:29,610 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:29,611 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:29,612 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39967
-2022-08-26 14:08:29,612 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43359
-2022-08-26 14:08:29,616 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36055
-2022-08-26 14:08:29,616 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36055
-2022-08-26 14:08:29,616 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:29,616 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35195
-2022-08-26 14:08:29,616 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39967
-2022-08-26 14:08:29,616 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,616 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:29,616 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:29,617 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ehi6ytdd
-2022-08-26 14:08:29,617 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,617 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33151
-2022-08-26 14:08:29,617 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33151
-2022-08-26 14:08:29,617 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:29,617 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32775
-2022-08-26 14:08:29,617 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39967
-2022-08-26 14:08:29,617 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,617 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:29,617 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:29,618 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yws1j6sh
-2022-08-26 14:08:29,618 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,620 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36055', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:29,621 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36055
-2022-08-26 14:08:29,621 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,621 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33151', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:29,621 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33151
-2022-08-26 14:08:29,621 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,622 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39967
-2022-08-26 14:08:29,622 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,622 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39967
-2022-08-26 14:08:29,622 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,622 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,622 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,636 - distributed.scheduler - INFO - Receive client connection: Client-3c0636ee-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,636 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,658 - distributed.scheduler - INFO - Remove client Client-3c0636ee-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,658 - distributed.scheduler - INFO - Remove client Client-3c0636ee-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,659 - distributed.scheduler - INFO - Close client connection: Client-3c0636ee-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,659 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36055
-2022-08-26 14:08:29,659 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33151
-2022-08-26 14:08:29,660 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36055', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:29,660 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36055
-2022-08-26 14:08:29,661 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33151', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:29,661 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33151
-2022-08-26 14:08:29,661 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:29,661 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-88c018de-06fb-4388-a5c7-ebed15988a44 Address tcp://127.0.0.1:36055 Status: Status.closing
-2022-08-26 14:08:29,661 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-48f5ceec-974b-4a64-ab81-491a7ae116d2 Address tcp://127.0.0.1:33151 Status: Status.closing
-2022-08-26 14:08:29,662 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:29,662 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:29,866 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_minimum_resource 2022-08-26 14:08:29,872 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:29,874 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:29,874 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42299
-2022-08-26 14:08:29,874 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40339
-2022-08-26 14:08:29,877 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43173
-2022-08-26 14:08:29,877 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43173
-2022-08-26 14:08:29,877 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:29,877 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43679
-2022-08-26 14:08:29,877 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42299
-2022-08-26 14:08:29,877 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,877 - distributed.worker - INFO -               Threads:                          4
-2022-08-26 14:08:29,877 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:29,877 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-szj8jqt9
-2022-08-26 14:08:29,877 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,879 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43173', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:29,879 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43173
-2022-08-26 14:08:29,879 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,880 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42299
-2022-08-26 14:08:29,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:29,880 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:29,893 - distributed.scheduler - INFO - Receive client connection: Client-3c2d7444-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:29,893 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:30,559 - distributed.scheduler - INFO - Remove client Client-3c2d7444-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:30,559 - distributed.scheduler - INFO - Remove client Client-3c2d7444-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:30,560 - distributed.scheduler - INFO - Close client connection: Client-3c2d7444-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:30,560 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43173
-2022-08-26 14:08:30,561 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43173', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:30,561 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43173
-2022-08-26 14:08:30,561 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:30,561 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e00f565b-16bd-4a2d-b233-cf77229c93c0 Address tcp://127.0.0.1:43173 Status: Status.closing
-2022-08-26 14:08:30,562 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:30,562 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:30,766 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_constrained_vs_ready_priority_1[1-0-y-False] PASSED
-distributed/tests/test_resources.py::test_constrained_vs_ready_priority_1[1-0-y-True] PASSED
-distributed/tests/test_resources.py::test_constrained_vs_ready_priority_1[0-1-x-False] PASSED
-distributed/tests/test_resources.py::test_constrained_vs_ready_priority_1[0-1-x-True] PASSED
-distributed/tests/test_resources.py::test_constrained_vs_ready_priority_2[1-0-y-False] PASSED
-distributed/tests/test_resources.py::test_constrained_vs_ready_priority_2[1-0-y-True] PASSED
-distributed/tests/test_resources.py::test_constrained_vs_ready_priority_2[0-1-x-False] PASSED
-distributed/tests/test_resources.py::test_constrained_vs_ready_priority_2[0-1-x-True] PASSED
-distributed/tests/test_resources.py::test_constrained_tasks_respect_priority PASSED
-distributed/tests/test_resources.py::test_task_cancelled_and_readded_with_resources PASSED
-distributed/tests/test_resources.py::test_balance_resources SKIPPED
-distributed/tests/test_resources.py::test_set_resources 2022-08-26 14:08:30,787 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:30,789 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:30,789 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37543
-2022-08-26 14:08:30,789 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35789
-2022-08-26 14:08:30,792 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41799
-2022-08-26 14:08:30,792 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41799
-2022-08-26 14:08:30,792 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:30,792 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46165
-2022-08-26 14:08:30,792 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37543
-2022-08-26 14:08:30,792 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:30,792 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:30,792 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:30,792 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7rjpsj0m
-2022-08-26 14:08:30,792 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:30,794 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41799', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:30,795 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41799
-2022-08-26 14:08:30,795 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:30,795 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37543
-2022-08-26 14:08:30,795 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:30,795 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:30,809 - distributed.scheduler - INFO - Receive client connection: Client-3cb92376-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:30,809 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:30,831 - distributed.scheduler - INFO - Remove client Client-3cb92376-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:30,831 - distributed.scheduler - INFO - Remove client Client-3cb92376-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:30,831 - distributed.scheduler - INFO - Close client connection: Client-3cb92376-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:30,832 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41799
-2022-08-26 14:08:30,833 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-41fe636f-f3e4-4ae3-ab58-0cf133229d49 Address tcp://127.0.0.1:41799 Status: Status.closing
-2022-08-26 14:08:30,833 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41799', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:30,833 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41799
-2022-08-26 14:08:30,833 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:30,834 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:30,834 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:31,038 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_persist_collections 2022-08-26 14:08:31,043 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:31,045 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:31,045 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42521
-2022-08-26 14:08:31,045 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43541
-2022-08-26 14:08:31,050 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41561
-2022-08-26 14:08:31,050 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41561
-2022-08-26 14:08:31,050 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:31,050 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41985
-2022-08-26 14:08:31,050 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42521
-2022-08-26 14:08:31,050 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:31,050 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:31,050 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:31,050 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k7y5fmxz
-2022-08-26 14:08:31,050 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:31,051 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32943
-2022-08-26 14:08:31,051 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32943
-2022-08-26 14:08:31,051 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:31,051 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41683
-2022-08-26 14:08:31,051 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42521
-2022-08-26 14:08:31,051 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:31,051 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:31,051 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:31,051 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a2cudjxr
-2022-08-26 14:08:31,051 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:31,054 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41561', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:31,054 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41561
-2022-08-26 14:08:31,054 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:31,055 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32943', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:31,055 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32943
-2022-08-26 14:08:31,055 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:31,055 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42521
-2022-08-26 14:08:31,055 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:31,056 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42521
-2022-08-26 14:08:31,056 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:31,056 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:31,056 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:31,070 - distributed.scheduler - INFO - Receive client connection: Client-3ce0f8db-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:31,070 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:31,125 - distributed.scheduler - INFO - Remove client Client-3ce0f8db-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:31,125 - distributed.scheduler - INFO - Remove client Client-3ce0f8db-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:31,125 - distributed.scheduler - INFO - Close client connection: Client-3ce0f8db-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:31,126 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41561
-2022-08-26 14:08:31,126 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32943
-2022-08-26 14:08:31,127 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41561', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:31,127 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41561
-2022-08-26 14:08:31,127 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32943', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:31,127 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32943
-2022-08-26 14:08:31,127 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:31,127 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-51fbcee2-b9c0-46c1-bcc6-c885478b099b Address tcp://127.0.0.1:41561 Status: Status.closing
-2022-08-26 14:08:31,128 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fae4699d-fe68-4573-ac6d-02c3cfb690c1 Address tcp://127.0.0.1:32943 Status: Status.closing
-2022-08-26 14:08:31,129 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:31,129 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:31,333 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_dont_optimize_out SKIPPED
-distributed/tests/test_resources.py::test_full_collections SKIPPED (...)
-distributed/tests/test_resources.py::test_collections_get[True] 2022-08-26 14:08:32,202 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:32,205 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:32,208 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:32,208 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42183
-2022-08-26 14:08:32,208 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:32,224 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38877
-2022-08-26 14:08:32,224 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38877
-2022-08-26 14:08:32,225 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37613
-2022-08-26 14:08:32,225 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42183
-2022-08-26 14:08:32,225 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:32,225 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:32,225 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:32,225 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ip1l6wqe
-2022-08-26 14:08:32,225 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:32,261 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45037
-2022-08-26 14:08:32,261 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45037
-2022-08-26 14:08:32,261 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40353
-2022-08-26 14:08:32,261 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42183
-2022-08-26 14:08:32,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:32,261 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:32,261 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:32,261 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-se7e9f4c
-2022-08-26 14:08:32,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:32,512 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38877', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:32,774 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38877
-2022-08-26 14:08:32,774 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:32,774 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42183
-2022-08-26 14:08:32,774 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:32,775 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45037', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:32,775 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45037
-2022-08-26 14:08:32,775 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:32,775 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:32,775 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42183
-2022-08-26 14:08:32,775 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:32,776 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:32,781 - distributed.scheduler - INFO - Receive client connection: Client-3de610df-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:32,781 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:32,785 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:08:32,821 - distributed.worker - INFO - Run out-of-band function 'g'
-2022-08-26 14:08:32,821 - distributed.worker - INFO - Run out-of-band function 'g'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.log` attribute has been moved to `Worker.state.log`
-  warnings.warn(
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.log` attribute has been moved to `Worker.state.log`
-  warnings.warn(
-XFAIL2022-08-26 14:08:32,837 - distributed.scheduler - INFO - Remove client Client-3de610df-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:32,837 - distributed.scheduler - INFO - Remove client Client-3de610df-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:32,837 - distributed.scheduler - INFO - Close client connection: Client-3de610df-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_resources.py::test_collections_get[False] 2022-08-26 14:08:33,719 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:08:33,722 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:33,725 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:33,725 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34817
-2022-08-26 14:08:33,725 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:08:33,729 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ip1l6wqe', purging
-2022-08-26 14:08:33,730 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-se7e9f4c', purging
-2022-08-26 14:08:33,736 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46673
-2022-08-26 14:08:33,736 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46673
-2022-08-26 14:08:33,736 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35519
-2022-08-26 14:08:33,736 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34817
-2022-08-26 14:08:33,736 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:33,736 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:33,736 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:33,736 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p6rrtocc
-2022-08-26 14:08:33,736 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:33,774 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40603
-2022-08-26 14:08:33,774 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40603
-2022-08-26 14:08:33,774 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46603
-2022-08-26 14:08:33,774 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34817
-2022-08-26 14:08:33,774 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:33,774 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:33,774 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:33,774 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x5hjn15l
-2022-08-26 14:08:33,774 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,020 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46673', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:34,279 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46673
-2022-08-26 14:08:34,280 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,280 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34817
-2022-08-26 14:08:34,280 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,280 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40603', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:34,281 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40603
-2022-08-26 14:08:34,281 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,281 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,281 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34817
-2022-08-26 14:08:34,281 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,282 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,287 - distributed.scheduler - INFO - Receive client connection: Client-3ecbcf0c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:34,287 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,291 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:08:34,324 - distributed.worker - INFO - Run out-of-band function 'g'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.log` attribute has been moved to `Worker.state.log`
-  warnings.warn(
-2022-08-26 14:08:34,325 - distributed.worker - INFO - Run out-of-band function 'g'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.log` attribute has been moved to `Worker.state.log`
-  warnings.warn(
-PASSED2022-08-26 14:08:34,334 - distributed.scheduler - INFO - Remove client Client-3ecbcf0c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:34,334 - distributed.scheduler - INFO - Remove client Client-3ecbcf0c-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_resources.py::test_resources_from_config 2022-08-26 14:08:34,346 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:34,348 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:34,348 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34209
-2022-08-26 14:08:34,348 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44471
-2022-08-26 14:08:34,349 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-x5hjn15l', purging
-2022-08-26 14:08:34,349 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-p6rrtocc', purging
-2022-08-26 14:08:34,353 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33337
-2022-08-26 14:08:34,353 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33337
-2022-08-26 14:08:34,353 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:34,353 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42837
-2022-08-26 14:08:34,353 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34209
-2022-08-26 14:08:34,353 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,353 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:34,353 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:34,353 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tp6jw9p6
-2022-08-26 14:08:34,353 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,354 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41477
-2022-08-26 14:08:34,354 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41477
-2022-08-26 14:08:34,354 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:34,354 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36583
-2022-08-26 14:08:34,354 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34209
-2022-08-26 14:08:34,354 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,354 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:34,354 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:34,354 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ppx5zzy0
-2022-08-26 14:08:34,354 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,357 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33337', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:34,357 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33337
-2022-08-26 14:08:34,357 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,358 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41477', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:34,358 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41477
-2022-08-26 14:08:34,358 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,358 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34209
-2022-08-26 14:08:34,358 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,359 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34209
-2022-08-26 14:08:34,359 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,359 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,359 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,373 - distributed.scheduler - INFO - Receive client connection: Client-3ed8f25f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:34,373 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,384 - distributed.scheduler - INFO - Remove client Client-3ed8f25f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:34,384 - distributed.scheduler - INFO - Remove client Client-3ed8f25f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:34,384 - distributed.scheduler - INFO - Close client connection: Client-3ed8f25f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:34,385 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33337
-2022-08-26 14:08:34,385 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41477
-2022-08-26 14:08:34,386 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33337', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:34,386 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33337
-2022-08-26 14:08:34,386 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41477', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:34,386 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41477
-2022-08-26 14:08:34,386 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:34,386 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e1287394-38a1-4caf-a721-9e17253ef874 Address tcp://127.0.0.1:33337 Status: Status.closing
-2022-08-26 14:08:34,387 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6a03844d-e48b-4902-b1f5-25d232ad23f2 Address tcp://127.0.0.1:41477 Status: Status.closing
-2022-08-26 14:08:34,387 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:34,388 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:34,592 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_resources_from_python_override_config 2022-08-26 14:08:34,598 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:34,600 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:34,600 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40385
-2022-08-26 14:08:34,600 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39813
-2022-08-26 14:08:34,604 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40951
-2022-08-26 14:08:34,604 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40951
-2022-08-26 14:08:34,604 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:34,604 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33425
-2022-08-26 14:08:34,604 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40385
-2022-08-26 14:08:34,605 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,605 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:34,605 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:34,605 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-iq5arm9r
-2022-08-26 14:08:34,605 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,605 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45743
-2022-08-26 14:08:34,605 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45743
-2022-08-26 14:08:34,605 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:34,605 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35285
-2022-08-26 14:08:34,605 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40385
-2022-08-26 14:08:34,605 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,605 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:34,606 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:34,606 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-csii5b9q
-2022-08-26 14:08:34,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,608 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40951', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:34,609 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40951
-2022-08-26 14:08:34,609 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,609 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45743', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:34,609 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45743
-2022-08-26 14:08:34,609 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,610 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40385
-2022-08-26 14:08:34,610 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,610 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40385
-2022-08-26 14:08:34,610 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,610 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,610 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,624 - distributed.scheduler - INFO - Receive client connection: Client-3eff47f6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:34,624 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,635 - distributed.scheduler - INFO - Remove client Client-3eff47f6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:34,635 - distributed.scheduler - INFO - Remove client Client-3eff47f6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:34,636 - distributed.scheduler - INFO - Close client connection: Client-3eff47f6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:34,636 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40951
-2022-08-26 14:08:34,636 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45743
-2022-08-26 14:08:34,637 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40951', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:34,637 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40951
-2022-08-26 14:08:34,637 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45743', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:34,637 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45743
-2022-08-26 14:08:34,638 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:34,638 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9b7ce911-2168-43ee-8924-332e52ff954a Address tcp://127.0.0.1:40951 Status: Status.closing
-2022-08-26 14:08:34,638 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cba9e39b-69d0-448e-9ea7-f20c90ad55e1 Address tcp://127.0.0.1:45743 Status: Status.closing
-2022-08-26 14:08:34,639 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:34,639 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:34,842 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_resources.py::test_cancelled_with_resources[executing-ExecuteSuccessEvent] PASSED
-distributed/tests/test_resources.py::test_cancelled_with_resources[executing-ExecuteFailureEvent] PASSED
-distributed/tests/test_resources.py::test_cancelled_with_resources[executing-RescheduleEvent] PASSED
-distributed/tests/test_resources.py::test_cancelled_with_resources[long-running-ExecuteSuccessEvent] PASSED
-distributed/tests/test_resources.py::test_cancelled_with_resources[long-running-ExecuteFailureEvent] PASSED
-distributed/tests/test_resources.py::test_cancelled_with_resources[long-running-RescheduleEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_resources[executing-ExecuteSuccessEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_resources[executing-ExecuteFailureEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_resources[executing-RescheduleEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_resources[long-running-ExecuteSuccessEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_resources[long-running-ExecuteFailureEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_resources[long-running-RescheduleEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_different_resources[executing-ExecuteSuccessEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_different_resources[executing-ExecuteFailureEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_different_resources[executing-RescheduleEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_different_resources[long-running-ExecuteSuccessEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_different_resources[long-running-ExecuteFailureEvent] PASSED
-distributed/tests/test_resources.py::test_resumed_with_different_resources[long-running-RescheduleEvent] PASSED
-distributed/tests/test_scheduler.py::test_administration 2022-08-26 14:08:34,872 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:34,874 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:34,874 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41857
-2022-08-26 14:08:34,874 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39135
-2022-08-26 14:08:34,879 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38207
-2022-08-26 14:08:34,879 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38207
-2022-08-26 14:08:34,879 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:34,879 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45225
-2022-08-26 14:08:34,879 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41857
-2022-08-26 14:08:34,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,879 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:34,879 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:34,879 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-du_aeze8
-2022-08-26 14:08:34,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,879 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34947
-2022-08-26 14:08:34,880 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34947
-2022-08-26 14:08:34,880 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:34,880 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42533
-2022-08-26 14:08:34,880 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41857
-2022-08-26 14:08:34,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,880 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:34,880 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:34,880 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-so5rdat_
-2022-08-26 14:08:34,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,883 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38207', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:34,883 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38207
-2022-08-26 14:08:34,883 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,883 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34947', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:34,884 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34947
-2022-08-26 14:08:34,884 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,884 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41857
-2022-08-26 14:08:34,884 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,884 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41857
-2022-08-26 14:08:34,884 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:34,885 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,885 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:34,896 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38207
-2022-08-26 14:08:34,896 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34947
-2022-08-26 14:08:34,897 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38207', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:34,897 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38207
-2022-08-26 14:08:34,897 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34947', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:34,897 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34947
-2022-08-26 14:08:34,897 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:34,897 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-40b94cfc-3b90-4fe5-8447-3c1bcecaf662 Address tcp://127.0.0.1:38207 Status: Status.closing
-2022-08-26 14:08:34,898 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-813844f4-9b9b-45e3-9610-3dcdd2b9a940 Address tcp://127.0.0.1:34947 Status: Status.closing
-2022-08-26 14:08:34,898 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:34,899 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:35,103 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
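The block above is the routine lifecycle of an in-process test cluster: the scheduler starts, two workers register, the test body runs, and everything is torn down in order. As a rough illustration only (not code from the test suite), the same sequence can be reproduced with the public dask.distributed API; the cluster size and the helper names below are illustrative assumptions:

    # Minimal sketch using the public dask.distributed API; options are
    # illustrative, not the fixture parameters used by test_administration.
    from dask.distributed import Client, LocalCluster

    if __name__ == "__main__":                 # LocalCluster may spawn worker processes
        cluster = LocalCluster(n_workers=2, threads_per_worker=1)
        client = Client(cluster)               # appears as "Receive client connection"
                                               # in the client-using tests further below
        print(list(client.scheduler_info()["workers"]))   # the registered worker addresses
        client.close()                         # "Remove client ..."
        cluster.close()                        # "Stopping worker ..." / "Scheduler closing..."

Every PASSED block in this log follows the same start/register/teardown pattern; only the test body in between differs.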
-distributed/tests/test_scheduler.py::test_respect_data_in_memory 2022-08-26 14:08:35,108 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:35,110 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:35,110 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34947
-2022-08-26 14:08:35,110 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33237
-2022-08-26 14:08:35,113 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45483
-2022-08-26 14:08:35,113 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45483
-2022-08-26 14:08:35,113 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:35,113 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38943
-2022-08-26 14:08:35,113 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34947
-2022-08-26 14:08:35,113 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,113 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:35,113 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:35,113 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1kc4hyge
-2022-08-26 14:08:35,113 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,115 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45483', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:35,115 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45483
-2022-08-26 14:08:35,116 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,116 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34947
-2022-08-26 14:08:35,116 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,116 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,129 - distributed.scheduler - INFO - Receive client connection: Client-3f4c6ebc-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,130 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,163 - distributed.scheduler - INFO - Remove client Client-3f4c6ebc-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,163 - distributed.scheduler - INFO - Remove client Client-3f4c6ebc-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,164 - distributed.scheduler - INFO - Close client connection: Client-3f4c6ebc-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,164 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45483
-2022-08-26 14:08:35,165 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45483', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:35,165 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45483
-2022-08-26 14:08:35,165 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:35,165 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4fa9d9d0-8583-4e4c-a8ad-08f1c738fcc3 Address tcp://127.0.0.1:45483 Status: Status.closing
-2022-08-26 14:08:35,166 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:35,166 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:35,369 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_recompute_released_results 2022-08-26 14:08:35,375 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:35,377 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:35,377 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39233
-2022-08-26 14:08:35,377 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34495
-2022-08-26 14:08:35,381 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45925
-2022-08-26 14:08:35,382 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45925
-2022-08-26 14:08:35,382 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:35,382 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46709
-2022-08-26 14:08:35,382 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39233
-2022-08-26 14:08:35,382 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,382 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:35,382 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:35,382 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t5alvwrt
-2022-08-26 14:08:35,382 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,382 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40627
-2022-08-26 14:08:35,382 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40627
-2022-08-26 14:08:35,382 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:35,383 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33589
-2022-08-26 14:08:35,383 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39233
-2022-08-26 14:08:35,383 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,383 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:35,383 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:35,383 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-eel7glyo
-2022-08-26 14:08:35,383 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,386 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45925', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:35,386 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45925
-2022-08-26 14:08:35,386 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,386 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40627', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:35,387 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40627
-2022-08-26 14:08:35,387 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,387 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39233
-2022-08-26 14:08:35,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,387 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39233
-2022-08-26 14:08:35,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,387 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,388 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,401 - distributed.scheduler - INFO - Receive client connection: Client-3f75e56c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,401 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,442 - distributed.scheduler - INFO - Remove client Client-3f75e56c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,442 - distributed.scheduler - INFO - Remove client Client-3f75e56c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,442 - distributed.scheduler - INFO - Close client connection: Client-3f75e56c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,443 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45925
-2022-08-26 14:08:35,444 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40627
-2022-08-26 14:08:35,444 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45925', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:35,445 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45925
-2022-08-26 14:08:35,445 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0df9e42e-ae7a-4648-9efa-6388290932ae Address tcp://127.0.0.1:45925 Status: Status.closing
-2022-08-26 14:08:35,445 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-46804d16-d669-49b5-b061-88eacf3bc7b6 Address tcp://127.0.0.1:40627 Status: Status.closing
-2022-08-26 14:08:35,446 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40627', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:35,446 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40627
-2022-08-26 14:08:35,446 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:35,446 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:35,447 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:35,651 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
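test_recompute_released_results exercises the scheduler's willingness to compute a key again once every client reference to it has been dropped. A hedged sketch of the same idea through the public futures API (the helper inc is made up for the example and is not part of the test suite):

    from dask.distributed import Client, LocalCluster

    def inc(x):                    # illustrative task
        return x + 1

    if __name__ == "__main__":
        with LocalCluster(n_workers=2, threads_per_worker=1) as cluster, \
                Client(cluster) as client:
            fut = client.submit(inc, 1)
            assert fut.result() == 2
            del fut                      # last reference gone: the key may be released
            fut = client.submit(inc, 1)  # same deterministic key; recomputed on demand
                                         # if the scheduler has already forgotten it
            assert fut.result() == 2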
-distributed/tests/test_scheduler.py::test_decide_worker_with_many_independent_leaves 2022-08-26 14:08:35,657 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:35,658 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:35,659 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36149
-2022-08-26 14:08:35,659 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43381
-2022-08-26 14:08:35,663 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40821
-2022-08-26 14:08:35,663 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40821
-2022-08-26 14:08:35,663 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:35,663 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43107
-2022-08-26 14:08:35,663 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36149
-2022-08-26 14:08:35,663 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,663 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:35,663 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:35,663 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h9boiswm
-2022-08-26 14:08:35,663 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,664 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46237
-2022-08-26 14:08:35,664 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46237
-2022-08-26 14:08:35,664 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:35,664 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39253
-2022-08-26 14:08:35,664 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36149
-2022-08-26 14:08:35,664 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,664 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:35,664 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:35,664 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-51iz5zl6
-2022-08-26 14:08:35,664 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,667 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40821', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:35,667 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40821
-2022-08-26 14:08:35,667 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,668 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46237', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:35,668 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46237
-2022-08-26 14:08:35,668 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,668 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36149
-2022-08-26 14:08:35,668 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,669 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36149
-2022-08-26 14:08:35,669 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:35,669 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,669 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,683 - distributed.scheduler - INFO - Receive client connection: Client-3fa0d918-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,683 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:35,813 - distributed.scheduler - INFO - Remove client Client-3fa0d918-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,813 - distributed.scheduler - INFO - Remove client Client-3fa0d918-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,813 - distributed.scheduler - INFO - Close client connection: Client-3fa0d918-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:35,814 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40821
-2022-08-26 14:08:35,815 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46237
-2022-08-26 14:08:35,815 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40821', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:35,816 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40821
-2022-08-26 14:08:35,816 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46237', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:35,816 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46237
-2022-08-26 14:08:35,816 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:35,816 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-30298c96-10eb-4a96-b614-6ff95350742e Address tcp://127.0.0.1:40821 Status: Status.closing
-2022-08-26 14:08:35,816 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5aa98d94-52c6-43f6-b16a-802f81543825 Address tcp://127.0.0.1:46237 Status: Status.closing
-2022-08-26 14:08:35,817 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:35,817 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:36,024 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
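test_decide_worker_with_many_independent_leaves looks at how the scheduler spreads unrelated root tasks over the available workers. A hedged sketch of the same behaviour with the public API (inc and the task count are illustrative assumptions):

    from dask.distributed import Client, LocalCluster

    def inc(x):                    # illustrative leaf task
        return x + 1

    if __name__ == "__main__":
        with LocalCluster(n_workers=2, threads_per_worker=1) as cluster, \
                Client(cluster) as client:
            futures = client.map(inc, range(20))      # 20 independent leaves
            client.gather(futures)
            # has_what() reports which keys each worker ended up holding
            print({w: len(keys) for w, keys in client.has_what().items()})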
-distributed/tests/test_scheduler.py::test_decide_worker_with_restrictions 2022-08-26 14:08:36,030 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:36,032 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:36,032 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34207
-2022-08-26 14:08:36,032 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34431
-2022-08-26 14:08:36,038 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34615
-2022-08-26 14:08:36,038 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34615
-2022-08-26 14:08:36,038 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:36,038 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36131
-2022-08-26 14:08:36,039 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34207
-2022-08-26 14:08:36,039 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,039 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,039 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,039 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vikh4jmy
-2022-08-26 14:08:36,039 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,039 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39949
-2022-08-26 14:08:36,039 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39949
-2022-08-26 14:08:36,039 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:36,039 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45389
-2022-08-26 14:08:36,039 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34207
-2022-08-26 14:08:36,039 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,039 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,040 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,040 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f7r4uofy
-2022-08-26 14:08:36,040 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,040 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41967
-2022-08-26 14:08:36,040 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41967
-2022-08-26 14:08:36,040 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:08:36,040 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34793
-2022-08-26 14:08:36,040 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34207
-2022-08-26 14:08:36,040 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,040 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,040 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,040 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zwbq5je9
-2022-08-26 14:08:36,041 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,044 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34615', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,044 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34615
-2022-08-26 14:08:36,045 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,045 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39949', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,045 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39949
-2022-08-26 14:08:36,045 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,045 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41967', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,046 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41967
-2022-08-26 14:08:36,046 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,046 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34207
-2022-08-26 14:08:36,046 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,046 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34207
-2022-08-26 14:08:36,046 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,047 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34207
-2022-08-26 14:08:36,047 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,047 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,047 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,047 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,061 - distributed.scheduler - INFO - Receive client connection: Client-3fda9377-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:36,061 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,084 - distributed.scheduler - INFO - Remove client Client-3fda9377-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:36,085 - distributed.scheduler - INFO - Remove client Client-3fda9377-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:36,085 - distributed.scheduler - INFO - Close client connection: Client-3fda9377-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:36,086 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34615
-2022-08-26 14:08:36,086 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39949
-2022-08-26 14:08:36,086 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41967
-2022-08-26 14:08:36,087 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34615', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:36,087 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34615
-2022-08-26 14:08:36,088 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41967', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:36,088 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41967
-2022-08-26 14:08:36,088 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8bc5297c-1a17-46e5-9ca7-1a653fd4657b Address tcp://127.0.0.1:34615 Status: Status.closing
-2022-08-26 14:08:36,088 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a3f09529-68cc-4808-a179-24e63e1532fb Address tcp://127.0.0.1:41967 Status: Status.closing
-2022-08-26 14:08:36,088 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4e3a6fde-c3fc-4330-acd3-9570c59fb5b9 Address tcp://127.0.0.1:39949 Status: Status.closing
-2022-08-26 14:08:36,089 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39949', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:36,089 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39949
-2022-08-26 14:08:36,089 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:36,090 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:36,090 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:36,297 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
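test_decide_worker_with_restrictions pins a task to a subset of workers and checks that the scheduler honours the restriction. Roughly, with the public API (a sketch under stated assumptions: inc is an invented helper, and the three-worker cluster merely mirrors the three workers seen in the log above):

    from dask.distributed import Client, LocalCluster

    def inc(x):                    # illustrative task
        return x + 1

    if __name__ == "__main__":
        with LocalCluster(n_workers=3, threads_per_worker=1) as cluster, \
                Client(cluster) as client:
            a, b, c = sorted(client.scheduler_info()["workers"])   # worker addresses
            fut = client.submit(inc, 1, workers=[a, b])   # only a or b may run this task
            fut.result()
            assert client.who_has([fut])[fut.key][0] in (a, b)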
-distributed/tests/test_scheduler.py::test_decide_worker_coschedule_order_neighbors[nthreads0-0] 2022-08-26 14:08:36,303 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:36,305 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:36,305 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33129
-2022-08-26 14:08:36,305 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35135
-2022-08-26 14:08:36,315 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35137
-2022-08-26 14:08:36,315 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35137
-2022-08-26 14:08:36,315 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:36,315 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44415
-2022-08-26 14:08:36,315 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33129
-2022-08-26 14:08:36,315 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,315 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,315 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,315 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8rzr7j00
-2022-08-26 14:08:36,315 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,316 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34255
-2022-08-26 14:08:36,316 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34255
-2022-08-26 14:08:36,316 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:36,316 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35323
-2022-08-26 14:08:36,316 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33129
-2022-08-26 14:08:36,316 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,316 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,316 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,316 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-40cv7f4n
-2022-08-26 14:08:36,316 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,316 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37811
-2022-08-26 14:08:36,317 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37811
-2022-08-26 14:08:36,317 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:08:36,317 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41563
-2022-08-26 14:08:36,317 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33129
-2022-08-26 14:08:36,317 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,317 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,317 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,317 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p1hdsn13
-2022-08-26 14:08:36,317 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,317 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45439
-2022-08-26 14:08:36,317 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45439
-2022-08-26 14:08:36,318 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:08:36,318 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36081
-2022-08-26 14:08:36,318 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33129
-2022-08-26 14:08:36,318 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,318 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,318 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,318 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7ziec55w
-2022-08-26 14:08:36,318 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,318 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38369
-2022-08-26 14:08:36,318 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38369
-2022-08-26 14:08:36,318 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:08:36,319 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44305
-2022-08-26 14:08:36,319 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33129
-2022-08-26 14:08:36,319 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,319 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,319 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,319 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7i4pxkc6
-2022-08-26 14:08:36,319 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,325 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35137', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,325 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35137
-2022-08-26 14:08:36,325 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,325 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34255', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,325 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34255
-2022-08-26 14:08:36,326 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,326 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37811', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,326 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37811
-2022-08-26 14:08:36,326 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,327 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45439', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,327 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45439
-2022-08-26 14:08:36,327 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,327 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38369', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,327 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38369
-2022-08-26 14:08:36,327 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,328 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33129
-2022-08-26 14:08:36,328 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,328 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33129
-2022-08-26 14:08:36,328 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,328 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33129
-2022-08-26 14:08:36,328 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,329 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33129
-2022-08-26 14:08:36,329 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,329 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33129
-2022-08-26 14:08:36,329 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,330 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,345 - distributed.scheduler - INFO - Receive client connection: Client-4005e323-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:36,345 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,639 - distributed.scheduler - INFO - Remove client Client-4005e323-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:36,639 - distributed.scheduler - INFO - Remove client Client-4005e323-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:36,639 - distributed.scheduler - INFO - Close client connection: Client-4005e323-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:36,640 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35137
-2022-08-26 14:08:36,640 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34255
-2022-08-26 14:08:36,640 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37811
-2022-08-26 14:08:36,641 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45439
-2022-08-26 14:08:36,641 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38369
-2022-08-26 14:08:36,643 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35137', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:36,643 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35137
-2022-08-26 14:08:36,643 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34255', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:36,643 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34255
-2022-08-26 14:08:36,643 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37811', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:36,643 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37811
-2022-08-26 14:08:36,643 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45439', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:36,643 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45439
-2022-08-26 14:08:36,644 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38369', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:36,644 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38369
-2022-08-26 14:08:36,644 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:36,644 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b2b084b1-bfe6-4098-8811-019c0f333283 Address tcp://127.0.0.1:35137 Status: Status.closing
-2022-08-26 14:08:36,644 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b43cdc27-8343-4a9b-872f-ff3ed247e1b8 Address tcp://127.0.0.1:34255 Status: Status.closing
-2022-08-26 14:08:36,644 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-309af82e-763e-4ad6-a9c8-92b6081938a5 Address tcp://127.0.0.1:37811 Status: Status.closing
-2022-08-26 14:08:36,645 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-967b9bfe-06e2-4866-a929-8af2b032c384 Address tcp://127.0.0.1:45439 Status: Status.closing
-2022-08-26 14:08:36,645 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-000bef85-19dc-404e-93e0-a25ca90d5094 Address tcp://127.0.0.1:38369 Status: Status.closing
-2022-08-26 14:08:36,647 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:36,647 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:36,857 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_decide_worker_coschedule_order_neighbors[nthreads0-1] 2022-08-26 14:08:36,864 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:36,866 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:36,866 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34461
-2022-08-26 14:08:36,866 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36175
-2022-08-26 14:08:36,875 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39697
-2022-08-26 14:08:36,876 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39697
-2022-08-26 14:08:36,876 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:36,876 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32773
-2022-08-26 14:08:36,876 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34461
-2022-08-26 14:08:36,876 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,876 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,876 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,876 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mm9lch6z
-2022-08-26 14:08:36,876 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,876 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34251
-2022-08-26 14:08:36,876 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34251
-2022-08-26 14:08:36,877 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:36,877 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38055
-2022-08-26 14:08:36,877 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34461
-2022-08-26 14:08:36,877 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,877 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,877 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,877 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a7fmn4bk
-2022-08-26 14:08:36,877 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,877 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42197
-2022-08-26 14:08:36,877 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42197
-2022-08-26 14:08:36,877 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:08:36,878 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38787
-2022-08-26 14:08:36,878 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34461
-2022-08-26 14:08:36,878 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,878 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,878 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,878 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yylk70hf
-2022-08-26 14:08:36,878 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,878 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33223
-2022-08-26 14:08:36,878 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33223
-2022-08-26 14:08:36,878 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:08:36,878 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33313
-2022-08-26 14:08:36,879 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34461
-2022-08-26 14:08:36,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,879 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,879 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,879 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kcm18p0f
-2022-08-26 14:08:36,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,879 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37499
-2022-08-26 14:08:36,879 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37499
-2022-08-26 14:08:36,879 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:08:36,879 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46183
-2022-08-26 14:08:36,879 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34461
-2022-08-26 14:08:36,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,880 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:36,880 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:36,880 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wxgyge29
-2022-08-26 14:08:36,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,885 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39697', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,886 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39697
-2022-08-26 14:08:36,886 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,886 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34251', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,886 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34251
-2022-08-26 14:08:36,886 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,887 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42197', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,887 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42197
-2022-08-26 14:08:36,887 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,887 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33223', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,888 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33223
-2022-08-26 14:08:36,888 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,888 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37499', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:36,888 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37499
-2022-08-26 14:08:36,888 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,889 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34461
-2022-08-26 14:08:36,889 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,889 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34461
-2022-08-26 14:08:36,889 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,889 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34461
-2022-08-26 14:08:36,889 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,890 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34461
-2022-08-26 14:08:36,890 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,890 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34461
-2022-08-26 14:08:36,890 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:36,890 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,890 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,890 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,890 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,890 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:36,906 - distributed.scheduler - INFO - Receive client connection: Client-405b79ec-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:36,906 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,167 - distributed.scheduler - INFO - Remove client Client-405b79ec-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:37,167 - distributed.scheduler - INFO - Remove client Client-405b79ec-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:37,168 - distributed.scheduler - INFO - Close client connection: Client-405b79ec-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:37,168 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39697
-2022-08-26 14:08:37,169 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34251
-2022-08-26 14:08:37,169 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42197
-2022-08-26 14:08:37,169 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33223
-2022-08-26 14:08:37,169 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37499
-2022-08-26 14:08:37,171 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39697', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:37,171 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39697
-2022-08-26 14:08:37,171 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34251', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:37,171 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34251
-2022-08-26 14:08:37,171 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42197', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:37,172 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42197
-2022-08-26 14:08:37,172 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33223', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:37,172 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33223
-2022-08-26 14:08:37,172 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37499', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:37,172 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37499
-2022-08-26 14:08:37,172 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:37,172 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9f9203b9-c620-439b-88f3-1df928365c60 Address tcp://127.0.0.1:39697 Status: Status.closing
-2022-08-26 14:08:37,173 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-79512453-84b9-45fc-8fe6-9f5cc6aaf2c0 Address tcp://127.0.0.1:34251 Status: Status.closing
-2022-08-26 14:08:37,173 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e95b4845-68d2-4a12-8e90-fd2cf5c30b7c Address tcp://127.0.0.1:42197 Status: Status.closing
-2022-08-26 14:08:37,173 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-60457308-f267-4ad1-aa6a-5c7e77211d93 Address tcp://127.0.0.1:33223 Status: Status.closing
-2022-08-26 14:08:37,173 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-65e87034-c2b8-4737-8f99-77c01614ea2f Address tcp://127.0.0.1:37499 Status: Status.closing
-2022-08-26 14:08:37,175 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:37,175 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:37,389 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
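The parametrised test_decide_worker_coschedule_order_neighbors cases that dominate the rest of this log assert that neighbouring root tasks of a collection are co-scheduled onto the same worker, so downstream reductions need fewer transfers. An observational sketch only, assuming dask.array and numpy are available alongside distributed (as they are in this test run); it does not reproduce the test's exact assertions:

    import dask.array as da
    from dask.distributed import Client, LocalCluster, wait

    if __name__ == "__main__":
        with LocalCluster(n_workers=5, threads_per_worker=1) as cluster, \
                Client(cluster) as client:
            x = da.random.random(100, chunks=10).persist()   # 10 neighbouring root chunks
            wait(x)
            # who_has() maps every in-memory key to the worker(s) holding it;
            # neighbouring chunks tend to land on the same worker.
            for key, workers in sorted(client.who_has().items()):
                print(key, workers)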
-distributed/tests/test_scheduler.py::test_decide_worker_coschedule_order_neighbors[nthreads0-4] 2022-08-26 14:08:37,396 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:37,397 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:37,397 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35649
-2022-08-26 14:08:37,398 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35945
-2022-08-26 14:08:37,407 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33619
-2022-08-26 14:08:37,407 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33619
-2022-08-26 14:08:37,407 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:37,407 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34743
-2022-08-26 14:08:37,407 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35649
-2022-08-26 14:08:37,407 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,408 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:37,408 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:37,408 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-czskny_d
-2022-08-26 14:08:37,408 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,408 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37119
-2022-08-26 14:08:37,408 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37119
-2022-08-26 14:08:37,408 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:37,408 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38393
-2022-08-26 14:08:37,408 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35649
-2022-08-26 14:08:37,408 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,408 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:37,409 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:37,409 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-scgtg4jo
-2022-08-26 14:08:37,409 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,409 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35029
-2022-08-26 14:08:37,409 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35029
-2022-08-26 14:08:37,409 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:08:37,409 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39463
-2022-08-26 14:08:37,409 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35649
-2022-08-26 14:08:37,409 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,409 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:37,409 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:37,410 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2bav1vfb
-2022-08-26 14:08:37,410 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,410 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33405
-2022-08-26 14:08:37,410 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33405
-2022-08-26 14:08:37,410 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:08:37,410 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32921
-2022-08-26 14:08:37,410 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35649
-2022-08-26 14:08:37,410 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,410 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:37,410 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:37,410 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-if28wxdu
-2022-08-26 14:08:37,411 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,411 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33527
-2022-08-26 14:08:37,411 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33527
-2022-08-26 14:08:37,411 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:08:37,411 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43055
-2022-08-26 14:08:37,411 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35649
-2022-08-26 14:08:37,411 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,411 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:37,411 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:37,411 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fd4dj6mx
-2022-08-26 14:08:37,411 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,417 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33619', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:37,417 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33619
-2022-08-26 14:08:37,418 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,418 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37119', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:37,418 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37119
-2022-08-26 14:08:37,418 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,419 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35029', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:37,419 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35029
-2022-08-26 14:08:37,419 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,419 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33405', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:37,419 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33405
-2022-08-26 14:08:37,419 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,420 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33527', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:37,420 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33527
-2022-08-26 14:08:37,420 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,420 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35649
-2022-08-26 14:08:37,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,421 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35649
-2022-08-26 14:08:37,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,421 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35649
-2022-08-26 14:08:37,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,421 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35649
-2022-08-26 14:08:37,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,422 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35649
-2022-08-26 14:08:37,422 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,422 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,422 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,422 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,422 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,422 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,438 - distributed.scheduler - INFO - Receive client connection: Client-40ac9ff6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:37,438 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,743 - distributed.scheduler - INFO - Remove client Client-40ac9ff6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:37,744 - distributed.scheduler - INFO - Remove client Client-40ac9ff6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:37,744 - distributed.scheduler - INFO - Close client connection: Client-40ac9ff6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:37,744 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33619
-2022-08-26 14:08:37,745 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37119
-2022-08-26 14:08:37,745 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35029
-2022-08-26 14:08:37,745 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33405
-2022-08-26 14:08:37,746 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33527
-2022-08-26 14:08:37,747 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33619', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:37,747 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33619
-2022-08-26 14:08:37,747 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37119', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:37,748 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37119
-2022-08-26 14:08:37,748 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35029', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:37,748 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35029
-2022-08-26 14:08:37,748 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33405', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:37,748 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33405
-2022-08-26 14:08:37,748 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33527', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:37,748 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33527
-2022-08-26 14:08:37,748 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:37,748 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-156bb9df-e133-482f-b421-292d3baa5cb5 Address tcp://127.0.0.1:33619 Status: Status.closing
-2022-08-26 14:08:37,749 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c463d4f0-a73c-4925-9e4e-d2211d5f5749 Address tcp://127.0.0.1:37119 Status: Status.closing
-2022-08-26 14:08:37,749 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-315df177-5e90-4b82-9b70-f92ec4b42910 Address tcp://127.0.0.1:35029 Status: Status.closing
-2022-08-26 14:08:37,749 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e27db157-d5e5-4a0a-bd2d-d43019e80854 Address tcp://127.0.0.1:33405 Status: Status.closing
-2022-08-26 14:08:37,749 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c1a5f9a0-5343-460c-bfb4-c1ab2f803c52 Address tcp://127.0.0.1:33527 Status: Status.closing
-2022-08-26 14:08:37,753 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:37,753 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:37,967 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_decide_worker_coschedule_order_neighbors[nthreads1-0] 2022-08-26 14:08:37,973 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:37,975 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:37,975 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36181
-2022-08-26 14:08:37,975 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38161
-2022-08-26 14:08:37,981 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44779
-2022-08-26 14:08:37,981 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44779
-2022-08-26 14:08:37,981 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:37,981 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46499
-2022-08-26 14:08:37,982 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36181
-2022-08-26 14:08:37,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,982 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:08:37,982 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:37,982 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cpkw2s8n
-2022-08-26 14:08:37,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,982 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40393
-2022-08-26 14:08:37,982 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40393
-2022-08-26 14:08:37,982 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:37,982 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39461
-2022-08-26 14:08:37,983 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36181
-2022-08-26 14:08:37,983 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,983 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:37,983 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:37,983 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p1f8dbz5
-2022-08-26 14:08:37,983 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,983 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43029
-2022-08-26 14:08:37,983 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43029
-2022-08-26 14:08:37,983 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:08:37,983 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43251
-2022-08-26 14:08:37,983 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36181
-2022-08-26 14:08:37,984 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,984 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:37,984 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:37,984 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dhh3rfnw
-2022-08-26 14:08:37,984 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,988 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44779', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:37,988 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44779
-2022-08-26 14:08:37,988 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,988 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40393', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:37,989 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40393
-2022-08-26 14:08:37,989 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,989 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43029', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:37,989 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43029
-2022-08-26 14:08:37,989 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,990 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36181
-2022-08-26 14:08:37,990 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,990 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36181
-2022-08-26 14:08:37,990 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,990 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36181
-2022-08-26 14:08:37,990 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:37,990 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,991 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:37,991 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,004 - distributed.scheduler - INFO - Receive client connection: Client-410320e6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:38,005 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,257 - distributed.scheduler - INFO - Remove client Client-410320e6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:38,257 - distributed.scheduler - INFO - Remove client Client-410320e6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:38,258 - distributed.scheduler - INFO - Close client connection: Client-410320e6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:38,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44779
-2022-08-26 14:08:38,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40393
-2022-08-26 14:08:38,259 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43029
-2022-08-26 14:08:38,260 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44779', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:38,260 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44779
-2022-08-26 14:08:38,260 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40393', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:38,260 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40393
-2022-08-26 14:08:38,260 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43029', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:38,260 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43029
-2022-08-26 14:08:38,260 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:38,261 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-441c0e2a-fb70-4e5c-8ca5-5cfefcd783e3 Address tcp://127.0.0.1:44779 Status: Status.closing
-2022-08-26 14:08:38,261 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f84cb1b1-25e0-43e9-a9e2-78ee73571f2a Address tcp://127.0.0.1:40393 Status: Status.closing
-2022-08-26 14:08:38,261 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fc734ab6-a2a7-49a8-85e1-839455e72ba6 Address tcp://127.0.0.1:43029 Status: Status.closing
-2022-08-26 14:08:38,263 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:38,263 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:38,479 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_decide_worker_coschedule_order_neighbors[nthreads1-1] 2022-08-26 14:08:38,485 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:38,487 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:38,487 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42701
-2022-08-26 14:08:38,487 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35763
-2022-08-26 14:08:38,493 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44345
-2022-08-26 14:08:38,494 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44345
-2022-08-26 14:08:38,494 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:38,494 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45149
-2022-08-26 14:08:38,494 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42701
-2022-08-26 14:08:38,494 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,494 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:08:38,494 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:38,494 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-guq714wj
-2022-08-26 14:08:38,494 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,494 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42005
-2022-08-26 14:08:38,495 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42005
-2022-08-26 14:08:38,495 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:38,495 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43621
-2022-08-26 14:08:38,495 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42701
-2022-08-26 14:08:38,495 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,495 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:38,495 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:38,495 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-01wpwf1q
-2022-08-26 14:08:38,495 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,495 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42119
-2022-08-26 14:08:38,495 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42119
-2022-08-26 14:08:38,495 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:08:38,496 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43727
-2022-08-26 14:08:38,496 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42701
-2022-08-26 14:08:38,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,496 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:38,496 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:38,496 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nt4cx8l6
-2022-08-26 14:08:38,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,500 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44345', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:38,500 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44345
-2022-08-26 14:08:38,500 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,500 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42005', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:38,501 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42005
-2022-08-26 14:08:38,501 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,501 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42119', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:38,501 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42119
-2022-08-26 14:08:38,501 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,502 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42701
-2022-08-26 14:08:38,502 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,502 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42701
-2022-08-26 14:08:38,502 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,502 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42701
-2022-08-26 14:08:38,502 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,503 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,503 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,503 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,517 - distributed.scheduler - INFO - Receive client connection: Client-4151483a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:38,517 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,758 - distributed.scheduler - INFO - Remove client Client-4151483a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:38,758 - distributed.scheduler - INFO - Remove client Client-4151483a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:38,759 - distributed.scheduler - INFO - Close client connection: Client-4151483a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:38,759 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44345
-2022-08-26 14:08:38,760 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42005
-2022-08-26 14:08:38,760 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42119
-2022-08-26 14:08:38,761 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44345', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:38,761 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44345
-2022-08-26 14:08:38,761 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42005', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:38,761 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42005
-2022-08-26 14:08:38,761 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42119', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:38,762 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42119
-2022-08-26 14:08:38,762 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:38,762 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f913eb48-1b29-4f46-ac36-f51c6a0d5889 Address tcp://127.0.0.1:44345 Status: Status.closing
-2022-08-26 14:08:38,762 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-86f90e32-b189-4065-a7f1-9bcbbd82a01a Address tcp://127.0.0.1:42005 Status: Status.closing
-2022-08-26 14:08:38,762 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f0daed23-d86a-44b4-adc3-95d44b5f34a9 Address tcp://127.0.0.1:42119 Status: Status.closing
-2022-08-26 14:08:38,765 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:38,765 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:38,977 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_decide_worker_coschedule_order_neighbors[nthreads1-4] 2022-08-26 14:08:38,983 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:38,985 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:38,985 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40121
-2022-08-26 14:08:38,985 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41507
-2022-08-26 14:08:38,991 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45153
-2022-08-26 14:08:38,991 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45153
-2022-08-26 14:08:38,991 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:38,991 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36815
-2022-08-26 14:08:38,992 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40121
-2022-08-26 14:08:38,992 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,992 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:08:38,992 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:38,992 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lk2w8cqu
-2022-08-26 14:08:38,992 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,992 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32985
-2022-08-26 14:08:38,992 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32985
-2022-08-26 14:08:38,992 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:38,992 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44813
-2022-08-26 14:08:38,992 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40121
-2022-08-26 14:08:38,993 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,993 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:38,993 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:38,993 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cz2pw94p
-2022-08-26 14:08:38,993 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,993 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35611
-2022-08-26 14:08:38,993 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35611
-2022-08-26 14:08:38,993 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:08:38,993 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33119
-2022-08-26 14:08:38,993 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40121
-2022-08-26 14:08:38,993 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,993 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:38,994 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:38,994 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cb86iy3i
-2022-08-26 14:08:38,994 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,997 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45153', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:38,998 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45153
-2022-08-26 14:08:38,998 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,998 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32985', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:38,998 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32985
-2022-08-26 14:08:38,998 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,999 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35611', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:38,999 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35611
-2022-08-26 14:08:38,999 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:38,999 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40121
-2022-08-26 14:08:38,999 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:38,999 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40121
-2022-08-26 14:08:38,999 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,000 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40121
-2022-08-26 14:08:39,000 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,000 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,000 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,000 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,014 - distributed.scheduler - INFO - Receive client connection: Client-419d2cf3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,014 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,283 - distributed.scheduler - INFO - Remove client Client-419d2cf3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,283 - distributed.scheduler - INFO - Remove client Client-419d2cf3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,283 - distributed.scheduler - INFO - Close client connection: Client-419d2cf3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,284 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45153
-2022-08-26 14:08:39,284 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32985
-2022-08-26 14:08:39,284 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35611
-2022-08-26 14:08:39,286 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45153', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:39,286 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45153
-2022-08-26 14:08:39,286 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32985', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:39,286 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32985
-2022-08-26 14:08:39,286 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35611', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:39,286 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35611
-2022-08-26 14:08:39,286 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:39,286 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-124e1dfe-749c-42df-a836-45e8150b844e Address tcp://127.0.0.1:45153 Status: Status.closing
-2022-08-26 14:08:39,287 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ee046f4a-7d10-4a4a-b60c-18e5b04cb1d4 Address tcp://127.0.0.1:32985 Status: Status.closing
-2022-08-26 14:08:39,287 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bf319416-4dbf-475d-9ea0-1e5515c6f823 Address tcp://127.0.0.1:35611 Status: Status.closing
-2022-08-26 14:08:39,289 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:39,289 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:39,502 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_move_data_over_break_restrictions 2022-08-26 14:08:39,509 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:39,510 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:39,510 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41023
-2022-08-26 14:08:39,510 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38949
-2022-08-26 14:08:39,517 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38873
-2022-08-26 14:08:39,517 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38873
-2022-08-26 14:08:39,517 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:39,517 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45107
-2022-08-26 14:08:39,517 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41023
-2022-08-26 14:08:39,517 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,517 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:39,517 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:39,517 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hfryc0c2
-2022-08-26 14:08:39,517 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,518 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42049
-2022-08-26 14:08:39,518 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42049
-2022-08-26 14:08:39,518 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:39,518 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38755
-2022-08-26 14:08:39,518 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41023
-2022-08-26 14:08:39,518 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,518 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:39,518 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:39,518 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hvlyfsv_
-2022-08-26 14:08:39,518 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,518 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42885
-2022-08-26 14:08:39,519 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42885
-2022-08-26 14:08:39,519 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:08:39,519 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45577
-2022-08-26 14:08:39,519 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41023
-2022-08-26 14:08:39,519 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,519 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:39,519 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:39,519 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q6n1tcbd
-2022-08-26 14:08:39,519 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,523 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38873', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:39,523 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38873
-2022-08-26 14:08:39,523 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,523 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42049', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:39,524 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42049
-2022-08-26 14:08:39,524 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,524 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42885', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:39,524 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42885
-2022-08-26 14:08:39,524 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,525 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41023
-2022-08-26 14:08:39,525 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,525 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41023
-2022-08-26 14:08:39,525 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,525 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41023
-2022-08-26 14:08:39,525 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,526 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,526 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,526 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,540 - distributed.scheduler - INFO - Receive client connection: Client-41ed614f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,540 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,562 - distributed.scheduler - INFO - Remove client Client-41ed614f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,562 - distributed.scheduler - INFO - Remove client Client-41ed614f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,563 - distributed.scheduler - INFO - Close client connection: Client-41ed614f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,564 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38873
-2022-08-26 14:08:39,564 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42049
-2022-08-26 14:08:39,564 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42885
-2022-08-26 14:08:39,565 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38873', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:39,565 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38873
-2022-08-26 14:08:39,566 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42885', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:39,566 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42885
-2022-08-26 14:08:39,566 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-71d1ffb2-2c1c-431b-9dec-f46c28646ec8 Address tcp://127.0.0.1:38873 Status: Status.closing
-2022-08-26 14:08:39,566 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fd15d9ef-dbb2-4fbe-9e75-d0a6561ef873 Address tcp://127.0.0.1:42885 Status: Status.closing
-2022-08-26 14:08:39,566 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4bb68785-3a6f-4e79-8f79-e83350a7a4bc Address tcp://127.0.0.1:42049 Status: Status.closing
-2022-08-26 14:08:39,567 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42049', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:39,567 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42049
-2022-08-26 14:08:39,567 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:39,568 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:39,568 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:39,777 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_balance_with_restrictions 2022-08-26 14:08:39,784 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:39,785 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:39,785 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35919
-2022-08-26 14:08:39,786 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44149
-2022-08-26 14:08:39,792 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39377
-2022-08-26 14:08:39,792 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39377
-2022-08-26 14:08:39,792 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:39,792 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36881
-2022-08-26 14:08:39,792 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35919
-2022-08-26 14:08:39,792 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,792 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:39,792 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:39,792 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cjzl0nec
-2022-08-26 14:08:39,792 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,793 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43509
-2022-08-26 14:08:39,793 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43509
-2022-08-26 14:08:39,793 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:39,793 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36279
-2022-08-26 14:08:39,793 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35919
-2022-08-26 14:08:39,793 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,793 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:39,793 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:39,793 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-up886dj8
-2022-08-26 14:08:39,793 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,794 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35643
-2022-08-26 14:08:39,794 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35643
-2022-08-26 14:08:39,794 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:08:39,794 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39921
-2022-08-26 14:08:39,794 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35919
-2022-08-26 14:08:39,794 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,794 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:39,794 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:39,794 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8sqvz92f
-2022-08-26 14:08:39,794 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,798 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39377', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:39,798 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39377
-2022-08-26 14:08:39,798 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,799 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43509', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:39,799 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43509
-2022-08-26 14:08:39,799 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,799 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35643', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:39,800 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35643
-2022-08-26 14:08:39,800 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,800 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35919
-2022-08-26 14:08:39,800 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,800 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35919
-2022-08-26 14:08:39,800 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,801 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35919
-2022-08-26 14:08:39,801 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:39,801 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,801 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,801 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,815 - distributed.scheduler - INFO - Receive client connection: Client-42176388-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,815 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:39,837 - distributed.scheduler - INFO - Remove client Client-42176388-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,837 - distributed.scheduler - INFO - Remove client Client-42176388-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,838 - distributed.scheduler - INFO - Close client connection: Client-42176388-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:39,839 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39377
-2022-08-26 14:08:39,839 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43509
-2022-08-26 14:08:39,840 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35643
-2022-08-26 14:08:39,841 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43509', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:39,841 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43509
-2022-08-26 14:08:39,841 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-480beae3-a1ef-405e-91c4-d5c9e7d32d1b Address tcp://127.0.0.1:43509 Status: Status.closing
-2022-08-26 14:08:39,841 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-af4cf2ac-9024-416b-a147-3a7a151e1c9a Address tcp://127.0.0.1:39377 Status: Status.closing
-2022-08-26 14:08:39,842 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c4fc5700-ea26-432b-9553-a67f2baccade Address tcp://127.0.0.1:35643 Status: Status.closing
-2022-08-26 14:08:39,842 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39377', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:39,842 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39377
-2022-08-26 14:08:39,842 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35643', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:39,842 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35643
-2022-08-26 14:08:39,842 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:39,843 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:39,843 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:40,049 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_no_valid_workers 2022-08-26 14:08:40,055 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:40,057 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:40,057 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35585
-2022-08-26 14:08:40,057 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37475
-2022-08-26 14:08:40,063 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34383
-2022-08-26 14:08:40,063 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34383
-2022-08-26 14:08:40,063 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:40,064 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38275
-2022-08-26 14:08:40,064 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35585
-2022-08-26 14:08:40,064 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,064 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:40,064 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:40,064 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-r9anpz36
-2022-08-26 14:08:40,064 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,064 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33013
-2022-08-26 14:08:40,064 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33013
-2022-08-26 14:08:40,064 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:40,065 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39989
-2022-08-26 14:08:40,065 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35585
-2022-08-26 14:08:40,065 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,065 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:40,065 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:40,065 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3ufwobnl
-2022-08-26 14:08:40,065 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,065 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45681
-2022-08-26 14:08:40,065 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45681
-2022-08-26 14:08:40,065 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:08:40,065 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45101
-2022-08-26 14:08:40,066 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35585
-2022-08-26 14:08:40,066 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,066 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:40,066 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:40,066 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pa12p2wq
-2022-08-26 14:08:40,066 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,070 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34383', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:40,070 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34383
-2022-08-26 14:08:40,070 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,070 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33013', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:40,070 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33013
-2022-08-26 14:08:40,071 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,071 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45681', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:40,071 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45681
-2022-08-26 14:08:40,071 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,072 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35585
-2022-08-26 14:08:40,072 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,072 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35585
-2022-08-26 14:08:40,072 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,072 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35585
-2022-08-26 14:08:40,072 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,072 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,072 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,073 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,086 - distributed.scheduler - INFO - Receive client connection: Client-4240cefe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,087 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,159 - distributed.scheduler - INFO - Remove client Client-4240cefe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,159 - distributed.scheduler - INFO - Remove client Client-4240cefe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,160 - distributed.scheduler - INFO - Close client connection: Client-4240cefe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,160 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34383
-2022-08-26 14:08:40,160 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33013
-2022-08-26 14:08:40,160 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45681
-2022-08-26 14:08:40,162 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34383', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:40,162 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34383
-2022-08-26 14:08:40,162 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33013', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:40,162 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33013
-2022-08-26 14:08:40,162 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45681', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:40,162 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45681
-2022-08-26 14:08:40,162 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:40,162 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3db481e1-46d1-48a6-b175-148d978dd4bc Address tcp://127.0.0.1:34383 Status: Status.closing
-2022-08-26 14:08:40,163 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7f79986b-d30c-40fd-a7e0-d07f810f6794 Address tcp://127.0.0.1:33013 Status: Status.closing
-2022-08-26 14:08:40,163 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-53ce35d5-3b7a-425e-bee5-b07c59454a7c Address tcp://127.0.0.1:45681 Status: Status.closing
-2022-08-26 14:08:40,164 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:40,164 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:40,369 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_no_valid_workers_loose_restrictions 2022-08-26 14:08:40,375 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:40,377 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:40,377 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44677
-2022-08-26 14:08:40,377 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42983
-2022-08-26 14:08:40,383 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43147
-2022-08-26 14:08:40,383 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43147
-2022-08-26 14:08:40,383 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:40,383 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38589
-2022-08-26 14:08:40,384 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44677
-2022-08-26 14:08:40,384 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,384 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:40,384 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:40,384 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-674_9sj_
-2022-08-26 14:08:40,384 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,384 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45875
-2022-08-26 14:08:40,384 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45875
-2022-08-26 14:08:40,384 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:40,384 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36137
-2022-08-26 14:08:40,384 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44677
-2022-08-26 14:08:40,385 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,385 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:40,385 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:40,385 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j1t2sjoh
-2022-08-26 14:08:40,385 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,385 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42821
-2022-08-26 14:08:40,385 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42821
-2022-08-26 14:08:40,385 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:08:40,385 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42267
-2022-08-26 14:08:40,385 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44677
-2022-08-26 14:08:40,385 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,386 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:40,386 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:40,386 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y2s8_ka2
-2022-08-26 14:08:40,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,390 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43147', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:40,390 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43147
-2022-08-26 14:08:40,390 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,390 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45875', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:40,391 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45875
-2022-08-26 14:08:40,391 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,391 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42821', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:40,391 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42821
-2022-08-26 14:08:40,391 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,392 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44677
-2022-08-26 14:08:40,392 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,392 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44677
-2022-08-26 14:08:40,392 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,392 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44677
-2022-08-26 14:08:40,392 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:40,392 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,392 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,393 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,406 - distributed.scheduler - INFO - Receive client connection: Client-4271a3fa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,407 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,430 - distributed.scheduler - INFO - Remove client Client-4271a3fa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,430 - distributed.scheduler - INFO - Remove client Client-4271a3fa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,431 - distributed.scheduler - INFO - Close client connection: Client-4271a3fa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,431 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43147
-2022-08-26 14:08:40,432 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45875
-2022-08-26 14:08:40,432 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42821
-2022-08-26 14:08:40,433 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43147', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:40,433 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43147
-2022-08-26 14:08:40,433 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42821', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:40,433 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42821
-2022-08-26 14:08:40,433 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-01dc8376-ea53-4e5f-8dcf-307cba7aff77 Address tcp://127.0.0.1:43147 Status: Status.closing
-2022-08-26 14:08:40,434 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6ab4eabd-95d2-4c75-9315-d6dc5dc76f55 Address tcp://127.0.0.1:42821 Status: Status.closing
-2022-08-26 14:08:40,434 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4f6d7e3a-1065-4056-94f4-a542d7d26736 Address tcp://127.0.0.1:45875 Status: Status.closing
-2022-08-26 14:08:40,435 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45875', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:40,435 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45875
-2022-08-26 14:08:40,435 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:40,435 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:40,436 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:40,641 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_no_workers 2022-08-26 14:08:40,647 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:40,649 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:40,649 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35621
-2022-08-26 14:08:40,649 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41181
-2022-08-26 14:08:40,652 - distributed.scheduler - INFO - Receive client connection: Client-42971cf3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,652 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:40,725 - distributed.scheduler - INFO - Remove client Client-42971cf3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,726 - distributed.scheduler - INFO - Remove client Client-42971cf3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,726 - distributed.scheduler - INFO - Close client connection: Client-42971cf3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:40,726 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:40,726 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:40,931 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_retire_workers_empty 2022-08-26 14:08:40,936 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:40,938 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:40,938 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35751
-2022-08-26 14:08:40,938 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39303
-2022-08-26 14:08:40,939 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:40,939 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:41,144 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_server_listens_to_other_ops 2022-08-26 14:08:41,149 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:41,151 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:41,151 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42135
-2022-08-26 14:08:41,151 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40543
-2022-08-26 14:08:41,156 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45869
-2022-08-26 14:08:41,156 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45869
-2022-08-26 14:08:41,156 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:41,156 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44707
-2022-08-26 14:08:41,156 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42135
-2022-08-26 14:08:41,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,156 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:41,156 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:41,156 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dkmm5r0k
-2022-08-26 14:08:41,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,157 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36901
-2022-08-26 14:08:41,157 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36901
-2022-08-26 14:08:41,157 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:41,157 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34401
-2022-08-26 14:08:41,157 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42135
-2022-08-26 14:08:41,157 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,157 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:41,157 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:41,157 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jzsnfh2g
-2022-08-26 14:08:41,157 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,160 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45869', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:41,160 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45869
-2022-08-26 14:08:41,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,161 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36901', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:41,161 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36901
-2022-08-26 14:08:41,161 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,161 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42135
-2022-08-26 14:08:41,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,161 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42135
-2022-08-26 14:08:41,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,162 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,162 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,174 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45869
-2022-08-26 14:08:41,175 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36901
-2022-08-26 14:08:41,176 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45869', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:41,176 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45869
-2022-08-26 14:08:41,176 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36901', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:41,176 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36901
-2022-08-26 14:08:41,176 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:41,176 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-775e3884-aa22-4d6c-b787-c23577bd7dd6 Address tcp://127.0.0.1:45869 Status: Status.closing
-2022-08-26 14:08:41,176 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-faffb45e-8ee5-4e2e-9291-0477b8252a64 Address tcp://127.0.0.1:36901 Status: Status.closing
-2022-08-26 14:08:41,177 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:41,177 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:41,381 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_remove_worker_from_scheduler 2022-08-26 14:08:41,387 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:41,389 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:41,389 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41537
-2022-08-26 14:08:41,389 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45473
-2022-08-26 14:08:41,393 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38449
-2022-08-26 14:08:41,393 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38449
-2022-08-26 14:08:41,393 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:41,393 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46865
-2022-08-26 14:08:41,394 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41537
-2022-08-26 14:08:41,394 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,394 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:41,394 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:41,394 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oq8b3d5k
-2022-08-26 14:08:41,394 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,394 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34269
-2022-08-26 14:08:41,394 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34269
-2022-08-26 14:08:41,394 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:41,394 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41605
-2022-08-26 14:08:41,394 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41537
-2022-08-26 14:08:41,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,395 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:41,395 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:41,395 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-56phzjqc
-2022-08-26 14:08:41,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,397 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38449', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:41,398 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38449
-2022-08-26 14:08:41,398 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,398 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34269', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:41,398 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34269
-2022-08-26 14:08:41,398 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,399 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41537
-2022-08-26 14:08:41,399 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,399 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41537
-2022-08-26 14:08:41,399 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,399 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,399 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,412 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38449', name: 0, status: running, memory: 0, processing: 7>
-2022-08-26 14:08:41,412 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38449
-2022-08-26 14:08:41,413 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38449
-2022-08-26 14:08:41,413 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34269
-2022-08-26 14:08:41,416 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34269', name: 1, status: closing, memory: 0, processing: 20>
-2022-08-26 14:08:41,416 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34269
-2022-08-26 14:08:41,416 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:41,417 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-48d00af0-0b0b-4333-b44a-6b7fc4378953 Address tcp://127.0.0.1:38449 Status: Status.closing
-2022-08-26 14:08:41,417 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bf747764-0d3a-4419-9f3c-ae4e3dea2c1a Address tcp://127.0.0.1:34269 Status: Status.closing
-2022-08-26 14:08:41,419 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:41,420 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:41,625 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_remove_worker_by_name_from_scheduler 2022-08-26 14:08:41,631 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:41,632 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:41,632 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44405
-2022-08-26 14:08:41,632 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42167
-2022-08-26 14:08:41,637 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41619
-2022-08-26 14:08:41,637 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41619
-2022-08-26 14:08:41,637 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:41,637 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37083
-2022-08-26 14:08:41,637 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44405
-2022-08-26 14:08:41,637 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,637 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:41,637 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:41,637 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cw2ki6q6
-2022-08-26 14:08:41,637 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,638 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40219
-2022-08-26 14:08:41,638 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40219
-2022-08-26 14:08:41,638 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:41,638 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42619
-2022-08-26 14:08:41,638 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44405
-2022-08-26 14:08:41,638 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,638 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:41,638 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:41,638 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i9y7tvo8
-2022-08-26 14:08:41,638 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,641 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41619', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:41,641 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41619
-2022-08-26 14:08:41,641 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,642 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40219', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:41,642 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40219
-2022-08-26 14:08:41,642 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,642 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44405
-2022-08-26 14:08:41,642 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,643 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44405
-2022-08-26 14:08:41,643 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,643 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,643 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,654 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41619', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:08:41,654 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41619
-2022-08-26 14:08:41,654 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41619
-2022-08-26 14:08:41,654 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40219
-2022-08-26 14:08:41,656 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40219', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:41,656 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40219
-2022-08-26 14:08:41,656 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:41,656 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cf70d0fd-c992-48a9-9c28-90727d8f8b2f Address tcp://127.0.0.1:41619 Status: Status.closing
-2022-08-26 14:08:41,656 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-59b32fea-5dc1-47a3-9069-786958c6d60a Address tcp://127.0.0.1:40219 Status: Status.closing
-2022-08-26 14:08:41,658 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:41,659 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:41,864 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_clear_events_worker_removal 2022-08-26 14:08:41,870 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:41,871 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:41,871 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40453
-2022-08-26 14:08:41,871 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46565
-2022-08-26 14:08:41,876 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33089
-2022-08-26 14:08:41,876 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33089
-2022-08-26 14:08:41,876 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:41,876 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37939
-2022-08-26 14:08:41,876 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40453
-2022-08-26 14:08:41,876 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,876 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:41,876 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:41,876 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-65q9x49u
-2022-08-26 14:08:41,876 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,877 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41521
-2022-08-26 14:08:41,877 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41521
-2022-08-26 14:08:41,877 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:41,877 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42871
-2022-08-26 14:08:41,877 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40453
-2022-08-26 14:08:41,877 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,877 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:41,877 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:41,877 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2tenr7e3
-2022-08-26 14:08:41,877 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,880 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33089', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:41,880 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33089
-2022-08-26 14:08:41,880 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,881 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41521', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:41,881 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41521
-2022-08-26 14:08:41,881 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,881 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40453
-2022-08-26 14:08:41,881 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,882 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40453
-2022-08-26 14:08:41,882 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:41,882 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,882 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:41,893 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33089', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:08:41,893 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33089
-2022-08-26 14:08:41,893 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33089
-2022-08-26 14:08:41,894 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-73a39c9e-62d5-4227-9fe4-94f529bbb7d8 Address tcp://127.0.0.1:33089 Status: Status.closing
-2022-08-26 14:08:41,914 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41521
-2022-08-26 14:08:41,915 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41521', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:41,915 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41521
-2022-08-26 14:08:41,915 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:41,915 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3fe65eb7-c345-47cd-af87-b13d60f72e1a Address tcp://127.0.0.1:41521 Status: Status.closing
-2022-08-26 14:08:41,915 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:41,916 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:42,120 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_clear_events_client_removal 2022-08-26 14:08:42,126 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:42,128 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:42,128 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46787
-2022-08-26 14:08:42,128 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34677
-2022-08-26 14:08:42,132 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46635
-2022-08-26 14:08:42,132 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46635
-2022-08-26 14:08:42,132 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:42,132 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38023
-2022-08-26 14:08:42,132 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46787
-2022-08-26 14:08:42,132 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,132 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:42,133 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:42,133 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rf2s4kqs
-2022-08-26 14:08:42,133 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,133 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36399
-2022-08-26 14:08:42,133 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36399
-2022-08-26 14:08:42,133 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:42,133 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34977
-2022-08-26 14:08:42,133 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46787
-2022-08-26 14:08:42,133 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,133 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:42,133 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:42,133 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-81b8uklt
-2022-08-26 14:08:42,134 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,136 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46635', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:42,137 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46635
-2022-08-26 14:08:42,137 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,137 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36399', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:42,137 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36399
-2022-08-26 14:08:42,137 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,138 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46787
-2022-08-26 14:08:42,138 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,138 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46787
-2022-08-26 14:08:42,138 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,138 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,138 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,152 - distributed.scheduler - INFO - Receive client connection: Client-437bfee3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:42,152 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,153 - distributed.scheduler - INFO - Remove client Client-437bfee3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:42,153 - distributed.scheduler - INFO - Remove client Client-437bfee3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:42,174 - distributed.scheduler - INFO - Remove client Client-437bfee3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:42,174 - distributed.scheduler - INFO - Remove client Client-437bfee3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:42,174 - distributed.scheduler - INFO - Close client connection: Client-437bfee3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:42,175 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46635
-2022-08-26 14:08:42,175 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36399
-2022-08-26 14:08:42,176 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46635', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:42,176 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46635
-2022-08-26 14:08:42,176 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36399', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:42,176 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36399
-2022-08-26 14:08:42,176 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:42,176 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-43878f2b-bbd1-4f93-b831-71d74115dafa Address tcp://127.0.0.1:46635 Status: Status.closing
-2022-08-26 14:08:42,177 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4f9568fb-ab0b-4b18-85ce-56ba545a580b Address tcp://127.0.0.1:36399 Status: Status.closing
-2022-08-26 14:08:42,177 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:42,178 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:42,382 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_add_worker 2022-08-26 14:08:42,388 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:42,390 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:42,390 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43051
-2022-08-26 14:08:42,390 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46193
-2022-08-26 14:08:42,394 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41343
-2022-08-26 14:08:42,394 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41343
-2022-08-26 14:08:42,394 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:42,394 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46715
-2022-08-26 14:08:42,394 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43051
-2022-08-26 14:08:42,394 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,394 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:42,394 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:42,395 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-99s1j97b
-2022-08-26 14:08:42,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,395 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32931
-2022-08-26 14:08:42,395 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32931
-2022-08-26 14:08:42,395 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:42,395 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35281
-2022-08-26 14:08:42,395 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43051
-2022-08-26 14:08:42,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,395 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:42,395 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:42,395 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-er0jaerf
-2022-08-26 14:08:42,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,398 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41343', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:42,399 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41343
-2022-08-26 14:08:42,399 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,399 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32931', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:42,399 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32931
-2022-08-26 14:08:42,399 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,400 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43051
-2022-08-26 14:08:42,400 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,400 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43051
-2022-08-26 14:08:42,400 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,400 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,400 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,414 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40465
-2022-08-26 14:08:42,414 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40465
-2022-08-26 14:08:42,414 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45747
-2022-08-26 14:08:42,414 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43051
-2022-08-26 14:08:42,415 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,415 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:08:42,415 - distributed.worker - INFO -                Memory:                  15.71 GiB
-2022-08-26 14:08:42,415 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-thpc72gc
-2022-08-26 14:08:42,415 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,420 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40465', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:42,420 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40465
-2022-08-26 14:08:42,420 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,421 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43051
-2022-08-26 14:08:42,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,421 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40465
-2022-08-26 14:08:42,422 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,423 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-12545023-4c0a-4c81-9e5b-f7cb4310714a Address tcp://127.0.0.1:40465 Status: Status.closing
-2022-08-26 14:08:42,424 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40465', status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:42,424 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40465
-2022-08-26 14:08:42,425 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41343
-2022-08-26 14:08:42,426 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32931
-2022-08-26 14:08:42,427 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32931', name: 1, status: closing, memory: 7, processing: 0>
-2022-08-26 14:08:42,427 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32931
-2022-08-26 14:08:42,427 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-72ef42de-a85b-48cb-a599-5101c6baf502 Address tcp://127.0.0.1:41343 Status: Status.closing
-2022-08-26 14:08:42,427 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8993f621-4286-40c8-b138-590caa2f75f2 Address tcp://127.0.0.1:32931 Status: Status.closing
-2022-08-26 14:08:42,428 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41343', name: 0, status: closing, memory: 3, processing: 7>
-2022-08-26 14:08:42,428 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41343
-2022-08-26 14:08:42,429 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:42,429 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:42,430 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:42,637 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
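[Editorial sketch, not part of the diff above] The test_add_worker log block shows a third, unnamed worker registering with a scheduler that already has two named workers, then shutting down again. A minimal sketch of doing the same thing programmatically, assuming only the public dask.distributed Worker class; the tcp://127.0.0.1:8786 address below is a placeholder, not taken from the log:

    import asyncio
    from dask.distributed import Worker

    async def add_worker(scheduler_address: str) -> None:
        # Worker(...) connects to the given scheduler and registers itself,
        # which is what produces the "Register worker ..." scheduler lines in
        # the log above; leaving the block stops the worker again.
        async with Worker(scheduler_address, nthreads=1) as worker:
            print("joined as", worker.address)

    # Placeholder address; point this at a running scheduler.
    # asyncio.run(add_worker("tcp://127.0.0.1:8786"))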
-distributed/tests/test_scheduler.py::test_blocked_handlers_are_respected 2022-08-26 14:08:42,643 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:42,644 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:42,644 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37321
-2022-08-26 14:08:42,645 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36085
-2022-08-26 14:08:42,649 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43085
-2022-08-26 14:08:42,649 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43085
-2022-08-26 14:08:42,649 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:42,649 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38103
-2022-08-26 14:08:42,649 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37321
-2022-08-26 14:08:42,649 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,649 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:42,649 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:42,649 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vzgm92i2
-2022-08-26 14:08:42,649 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,650 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35271
-2022-08-26 14:08:42,650 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35271
-2022-08-26 14:08:42,650 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:42,650 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34219
-2022-08-26 14:08:42,650 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37321
-2022-08-26 14:08:42,650 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,650 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:42,650 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:42,650 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_udh7kxt
-2022-08-26 14:08:42,650 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,653 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43085', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:42,654 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43085
-2022-08-26 14:08:42,654 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,654 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35271', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:42,654 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35271
-2022-08-26 14:08:42,654 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,655 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37321
-2022-08-26 14:08:42,655 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,655 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37321
-2022-08-26 14:08:42,655 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:42,655 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,655 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:42,667 - distributed.core - ERROR - Exception while handling op feed
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 768, in _handle_comm
-    result = handler(**msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 92, in _raise
-    raise exc
-ValueError: The 'feed' handler has been explicitly disallowed in Scheduler, possibly due to security concerns.
-2022-08-26 14:08:42,669 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43085
-2022-08-26 14:08:42,669 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35271
-2022-08-26 14:08:42,670 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43085', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:42,670 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43085
-2022-08-26 14:08:42,670 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35271', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:42,670 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35271
-2022-08-26 14:08:42,670 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:42,670 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c5a58b0e-a57f-4077-a5d8-aa6e88e7ca5c Address tcp://127.0.0.1:43085 Status: Status.closing
-2022-08-26 14:08:42,671 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c3aaa684-d1dc-4837-a674-02065492e0e3 Address tcp://127.0.0.1:35271 Status: Status.closing
-2022-08-26 14:08:42,671 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:42,672 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:42,877 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
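[Editorial sketch, not part of the diff above] The ValueError logged in test_blocked_handlers_are_respected comes from distributed's blocked-handlers mechanism, which lets a deployment disable individual scheduler RPC handlers. A minimal sketch of enabling it via configuration, assuming the documented "distributed.scheduler.blocked-handlers" config key; the single-worker local cluster and the choice of the "feed" handler are only illustrative:

    import dask
    from dask.distributed import Client, LocalCluster

    # Block the scheduler's "feed" handler before the cluster is created, so
    # the Scheduler picks the setting up from config at startup.
    with dask.config.set({"distributed.scheduler.blocked-handlers": ["feed"]}):
        with LocalCluster(n_workers=1, processes=False) as cluster:
            with Client(cluster) as client:
                # Any call that reaches the blocked handler should now fail on
                # the scheduler with a ValueError like the one logged above.
                print(sorted(client.scheduler_info()["workers"]))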
-distributed/tests/test_scheduler.py::test_scheduler_init_pulls_blocked_handlers_from_config 2022-08-26 14:08:42,883 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:42,884 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:42,884 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41841
-2022-08-26 14:08:42,885 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40699
-2022-08-26 14:08:42,885 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:42,885 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:43,089 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_feed 2022-08-26 14:08:43,095 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:43,096 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:43,096 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33867
-2022-08-26 14:08:43,096 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44893
-2022-08-26 14:08:43,101 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45953
-2022-08-26 14:08:43,101 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45953
-2022-08-26 14:08:43,101 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:43,101 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36069
-2022-08-26 14:08:43,101 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33867
-2022-08-26 14:08:43,101 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,101 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:43,101 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:43,101 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cn4ul97c
-2022-08-26 14:08:43,101 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,102 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43577
-2022-08-26 14:08:43,102 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43577
-2022-08-26 14:08:43,102 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:43,102 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36767
-2022-08-26 14:08:43,102 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33867
-2022-08-26 14:08:43,102 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,102 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:43,102 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:43,102 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h0bwq28l
-2022-08-26 14:08:43,102 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,105 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45953', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:43,105 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45953
-2022-08-26 14:08:43,106 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,106 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43577', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:43,106 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43577
-2022-08-26 14:08:43,106 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,106 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33867
-2022-08-26 14:08:43,106 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,107 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33867
-2022-08-26 14:08:43,107 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,107 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,107 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,161 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45953
-2022-08-26 14:08:43,162 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43577
-2022-08-26 14:08:43,162 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45953', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:43,163 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45953
-2022-08-26 14:08:43,163 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43577', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:43,163 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43577
-2022-08-26 14:08:43,163 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:43,163 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7387af3f-7495-4100-a276-ce10783bd0b1 Address tcp://127.0.0.1:45953 Status: Status.closing
-2022-08-26 14:08:43,163 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2bc02792-f329-4787-b25e-29ae7b852122 Address tcp://127.0.0.1:43577 Status: Status.closing
-2022-08-26 14:08:43,164 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:43,164 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:43,165 - distributed.core - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6368, in feed
-    await asyncio.sleep(interval)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-2022-08-26 14:08:43,369 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_feed_setup_teardown 2022-08-26 14:08:43,374 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:43,376 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:43,376 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43625
-2022-08-26 14:08:43,376 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35081
-2022-08-26 14:08:43,381 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41947
-2022-08-26 14:08:43,381 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41947
-2022-08-26 14:08:43,381 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:43,381 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34895
-2022-08-26 14:08:43,381 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43625
-2022-08-26 14:08:43,381 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,381 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:43,381 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:43,381 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-61rmdxi7
-2022-08-26 14:08:43,381 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,382 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35845
-2022-08-26 14:08:43,382 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35845
-2022-08-26 14:08:43,382 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:43,382 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42191
-2022-08-26 14:08:43,382 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43625
-2022-08-26 14:08:43,382 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,382 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:43,382 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:43,382 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ok0bo9ge
-2022-08-26 14:08:43,382 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,385 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41947', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:43,385 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41947
-2022-08-26 14:08:43,385 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,386 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35845', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:43,386 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35845
-2022-08-26 14:08:43,386 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,386 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43625
-2022-08-26 14:08:43,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,386 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43625
-2022-08-26 14:08:43,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,387 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,387 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,451 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41947
-2022-08-26 14:08:43,452 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35845
-2022-08-26 14:08:43,453 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41947', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:43,453 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41947
-2022-08-26 14:08:43,453 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35845', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:43,453 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35845
-2022-08-26 14:08:43,453 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:43,453 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2ddd82fb-b296-409f-8520-06afdb6855e5 Address tcp://127.0.0.1:41947 Status: Status.closing
-2022-08-26 14:08:43,454 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-62e6d052-3b1e-4c87-a7a5-9ced497b7e90 Address tcp://127.0.0.1:35845 Status: Status.closing
-2022-08-26 14:08:43,455 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:43,455 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:43,662 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_feed_large_bytestring 2022-08-26 14:08:43,668 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:43,670 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:43,670 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38923
-2022-08-26 14:08:43,670 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35039
-2022-08-26 14:08:43,674 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40227
-2022-08-26 14:08:43,675 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40227
-2022-08-26 14:08:43,675 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:43,675 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37477
-2022-08-26 14:08:43,675 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38923
-2022-08-26 14:08:43,675 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,675 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:43,675 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:43,675 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7r4tp709
-2022-08-26 14:08:43,675 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,675 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39617
-2022-08-26 14:08:43,675 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39617
-2022-08-26 14:08:43,675 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:43,676 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37705
-2022-08-26 14:08:43,676 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38923
-2022-08-26 14:08:43,676 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,676 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:43,676 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:43,676 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2fhoemxc
-2022-08-26 14:08:43,676 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,679 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40227', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:43,679 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40227
-2022-08-26 14:08:43,679 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,679 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39617', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:43,680 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39617
-2022-08-26 14:08:43,680 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,680 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38923
-2022-08-26 14:08:43,680 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,680 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38923
-2022-08-26 14:08:43,680 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:43,680 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,681 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:43,978 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40227
-2022-08-26 14:08:43,978 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39617
-2022-08-26 14:08:43,979 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40227', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:43,979 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40227
-2022-08-26 14:08:43,979 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39617', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:43,979 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39617
-2022-08-26 14:08:43,979 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:43,979 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6d72845d-7a58-4b52-952c-2555fdcba692 Address tcp://127.0.0.1:40227 Status: Status.closing
-2022-08-26 14:08:43,980 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9bd4ff15-2df6-49b4-b390-8744bfc9a296 Address tcp://127.0.0.1:39617 Status: Status.closing
-2022-08-26 14:08:43,981 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:43,981 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:43,981 - distributed.core - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6368, in feed
-    await asyncio.sleep(interval)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-2022-08-26 14:08:44,187 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-2022-08-26 14:08:44,187 - distributed.utils_perf - INFO - full garbage collection released 152.59 MiB from 1359 reference cycles (threshold: 9.54 MiB)
-PASSED
-distributed/tests/test_scheduler.py::test_delete_data 2022-08-26 14:08:44,193 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:44,195 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:44,195 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44935
-2022-08-26 14:08:44,195 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46405
-2022-08-26 14:08:44,199 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40843
-2022-08-26 14:08:44,199 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40843
-2022-08-26 14:08:44,200 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:44,200 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38437
-2022-08-26 14:08:44,200 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44935
-2022-08-26 14:08:44,200 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,200 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:44,200 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:44,200 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-r7apadil
-2022-08-26 14:08:44,200 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,200 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37013
-2022-08-26 14:08:44,200 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37013
-2022-08-26 14:08:44,200 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:44,200 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36041
-2022-08-26 14:08:44,201 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44935
-2022-08-26 14:08:44,201 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,201 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:44,201 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:44,201 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-74j2eei_
-2022-08-26 14:08:44,201 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,204 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40843', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:44,204 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40843
-2022-08-26 14:08:44,204 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,204 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37013', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:44,204 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37013
-2022-08-26 14:08:44,205 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,205 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44935
-2022-08-26 14:08:44,205 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,205 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44935
-2022-08-26 14:08:44,205 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,205 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,205 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,219 - distributed.scheduler - INFO - Receive client connection: Client-44b76835-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:44,219 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,241 - distributed.scheduler - INFO - Remove client Client-44b76835-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:44,242 - distributed.scheduler - INFO - Remove client Client-44b76835-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:44,242 - distributed.scheduler - INFO - Close client connection: Client-44b76835-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:44,243 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40843
-2022-08-26 14:08:44,243 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37013
-2022-08-26 14:08:44,244 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37013', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:44,244 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37013
-2022-08-26 14:08:44,244 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5cf592c5-73a1-4c6f-9a83-21f255b0251a Address tcp://127.0.0.1:37013 Status: Status.closing
-2022-08-26 14:08:44,244 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-602a2c1b-3706-4448-85d5-5aea11ba6f41 Address tcp://127.0.0.1:40843 Status: Status.closing
-2022-08-26 14:08:44,245 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40843', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:44,245 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40843
-2022-08-26 14:08:44,245 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:44,246 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:44,246 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:44,452 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_delete 2022-08-26 14:08:44,458 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:44,460 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:44,460 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38589
-2022-08-26 14:08:44,460 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43003
-2022-08-26 14:08:44,463 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33981
-2022-08-26 14:08:44,463 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33981
-2022-08-26 14:08:44,463 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:44,463 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34351
-2022-08-26 14:08:44,463 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38589
-2022-08-26 14:08:44,463 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,463 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:44,463 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:44,463 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ax_sucer
-2022-08-26 14:08:44,463 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,465 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33981', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:44,465 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33981
-2022-08-26 14:08:44,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,466 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38589
-2022-08-26 14:08:44,466 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,466 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,479 - distributed.scheduler - INFO - Receive client connection: Client-44df20a7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:44,480 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,495 - distributed.scheduler - INFO - Client Client-44df20a7-2583-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:08:44,495 - distributed.scheduler - INFO - Scheduler cancels key inc-03d935909bba38f9a49655e867cbf56a.  Force=False
-2022-08-26 14:08:44,517 - distributed.scheduler - INFO - Remove client Client-44df20a7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:44,518 - distributed.scheduler - INFO - Remove client Client-44df20a7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:44,518 - distributed.scheduler - INFO - Close client connection: Client-44df20a7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:44,518 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33981
-2022-08-26 14:08:44,519 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33981', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:44,519 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33981
-2022-08-26 14:08:44,519 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:44,519 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4ed10a9b-c7b3-478d-b9b1-ff8d578243ad Address tcp://127.0.0.1:33981 Status: Status.closing
-2022-08-26 14:08:44,520 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:44,520 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:44,726 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_filtered_communication 2022-08-26 14:08:44,732 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:44,734 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:44,734 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45217
-2022-08-26 14:08:44,734 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36457
-2022-08-26 14:08:44,738 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44953
-2022-08-26 14:08:44,738 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44953
-2022-08-26 14:08:44,738 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:44,738 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37507
-2022-08-26 14:08:44,739 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45217
-2022-08-26 14:08:44,739 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,739 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:44,739 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:44,739 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-go6obmzn
-2022-08-26 14:08:44,739 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,739 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44781
-2022-08-26 14:08:44,739 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44781
-2022-08-26 14:08:44,739 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:44,739 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39425
-2022-08-26 14:08:44,740 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45217
-2022-08-26 14:08:44,740 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,740 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:44,740 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:44,740 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4txy2v1v
-2022-08-26 14:08:44,740 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,743 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44953', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:44,743 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44953
-2022-08-26 14:08:44,743 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,743 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44781', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:44,744 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44781
-2022-08-26 14:08:44,744 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,744 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45217
-2022-08-26 14:08:44,744 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,744 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45217
-2022-08-26 14:08:44,744 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,745 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,745 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,758 - distributed.scheduler - INFO - Receive client connection: c
-2022-08-26 14:08:44,758 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,758 - distributed.scheduler - INFO - Receive client connection: f
-2022-08-26 14:08:44,759 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:44,770 - distributed.comm.tcp - WARNING - Closing dangling stream in <TCP  local=tcp://127.0.0.1:55076 remote=tcp://127.0.0.1:45217>
-2022-08-26 14:08:44,770 - distributed.comm.tcp - WARNING - Closing dangling stream in <TCP  local=tcp://127.0.0.1:55090 remote=tcp://127.0.0.1:45217>
-2022-08-26 14:08:44,770 - distributed.scheduler - INFO - Remove client c
-2022-08-26 14:08:44,770 - distributed.scheduler - INFO - Remove client f
-2022-08-26 14:08:44,771 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44953
-2022-08-26 14:08:44,771 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44781
-2022-08-26 14:08:44,771 - distributed.scheduler - INFO - Close client connection: c
-2022-08-26 14:08:44,772 - distributed.scheduler - INFO - Close client connection: f
-2022-08-26 14:08:44,773 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44953', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:44,773 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44953
-2022-08-26 14:08:44,773 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5fc2a481-e235-472b-b442-aee64c797179 Address tcp://127.0.0.1:44953 Status: Status.closing
-2022-08-26 14:08:44,773 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44781', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:44,773 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44781
-2022-08-26 14:08:44,773 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:44,774 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-85f2c6c2-2021-45c1-a4dd-09eb9baf6b3c Address tcp://127.0.0.1:44781 Status: Status.closing
-2022-08-26 14:08:44,775 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:44,775 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:44,981 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_dumps_function PASSED
-distributed/tests/test_scheduler.py::test_dumps_task PASSED
-distributed/tests/test_scheduler.py::test_ready_remove_worker 2022-08-26 14:08:44,989 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:44,990 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:44,990 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34663
-2022-08-26 14:08:44,990 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43145
-2022-08-26 14:08:44,995 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36105
-2022-08-26 14:08:44,995 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36105
-2022-08-26 14:08:44,995 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:44,995 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41575
-2022-08-26 14:08:44,995 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34663
-2022-08-26 14:08:44,995 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,995 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:44,995 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:44,995 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2zzhn3gd
-2022-08-26 14:08:44,995 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,996 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42905
-2022-08-26 14:08:44,996 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42905
-2022-08-26 14:08:44,996 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:44,996 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38431
-2022-08-26 14:08:44,996 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34663
-2022-08-26 14:08:44,996 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,996 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:44,996 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:44,996 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rc65zuul
-2022-08-26 14:08:44,996 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:44,999 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36105', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:44,999 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36105
-2022-08-26 14:08:44,999 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:45,000 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42905', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:45,000 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42905
-2022-08-26 14:08:45,000 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:45,000 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34663
-2022-08-26 14:08:45,000 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:45,001 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34663
-2022-08-26 14:08:45,001 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:45,001 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:45,001 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:45,013 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36105', name: 0, status: running, memory: 0, processing: 7>
-2022-08-26 14:08:45,014 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36105
-2022-08-26 14:08:45,015 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36105
-2022-08-26 14:08:45,015 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42905
-2022-08-26 14:08:45,018 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42905', name: 1, status: closing, memory: 0, processing: 20>
-2022-08-26 14:08:45,018 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42905
-2022-08-26 14:08:45,018 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:45,019 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2ebb4e04-38fe-4a77-b977-97e88356afc8 Address tcp://127.0.0.1:36105 Status: Status.closing
-2022-08-26 14:08:45,019 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fdcdb96a-e6a4-4ee5-a309-edad446ffea0 Address tcp://127.0.0.1:42905 Status: Status.closing
-2022-08-26 14:08:45,021 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:45,021 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:45,229 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_restart 2022-08-26 14:08:45,235 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:45,237 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:45,237 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34567
-2022-08-26 14:08:45,237 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40735
-2022-08-26 14:08:45,242 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:43713'
-2022-08-26 14:08:45,242 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:42451'
-2022-08-26 14:08:45,915 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36507
-2022-08-26 14:08:45,915 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36507
-2022-08-26 14:08:45,915 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:45,915 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37927
-2022-08-26 14:08:45,915 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34567
-2022-08-26 14:08:45,915 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:45,915 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:45,915 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42197
-2022-08-26 14:08:45,916 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:45,916 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42197
-2022-08-26 14:08:45,916 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8zemoxrv
-2022-08-26 14:08:45,916 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:45,916 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43643
-2022-08-26 14:08:45,916 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:45,916 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34567
-2022-08-26 14:08:45,916 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:45,916 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:45,916 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:45,916 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1kca27ou
-2022-08-26 14:08:45,916 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:46,173 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42197', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:46,174 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42197
-2022-08-26 14:08:46,174 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:46,174 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34567
-2022-08-26 14:08:46,174 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:46,174 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36507', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:46,174 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:46,175 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36507
-2022-08-26 14:08:46,175 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:46,175 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34567
-2022-08-26 14:08:46,175 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:46,175 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:46,185 - distributed.scheduler - INFO - Receive client connection: Client-45e3601a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:46,185 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:46,431 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:08:46,431 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:46,434 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:08:46,435 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:08:46,435 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36507
-2022-08-26 14:08:46,435 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42197
-2022-08-26 14:08:46,436 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-344863bb-ec05-44fd-bf06-06ff48b50f86 Address tcp://127.0.0.1:36507 Status: Status.closing
-2022-08-26 14:08:46,436 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36507', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:46,436 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36507
-2022-08-26 14:08:46,436 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2d36be77-91eb-4761-a1bb-591d94d8755d Address tcp://127.0.0.1:42197 Status: Status.closing
-2022-08-26 14:08:46,436 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42197', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:46,436 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42197
-2022-08-26 14:08:46,436 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:46,604 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:08:46,605 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:08:47,285 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40781
-2022-08-26 14:08:47,285 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40781
-2022-08-26 14:08:47,285 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:47,285 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34771
-2022-08-26 14:08:47,285 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34567
-2022-08-26 14:08:47,285 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:47,285 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:47,285 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:47,285 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-iotalils
-2022-08-26 14:08:47,285 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:47,286 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42803
-2022-08-26 14:08:47,286 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42803
-2022-08-26 14:08:47,286 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:47,286 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40071
-2022-08-26 14:08:47,286 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34567
-2022-08-26 14:08:47,286 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:47,286 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:08:47,286 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:47,287 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1xlvle8n
-2022-08-26 14:08:47,287 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:47,521 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42803', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:47,521 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42803
-2022-08-26 14:08:47,521 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:47,521 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34567
-2022-08-26 14:08:47,522 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:47,522 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:47,539 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40781', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:47,539 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40781
-2022-08-26 14:08:47,539 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:47,539 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34567
-2022-08-26 14:08:47,539 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:47,540 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:47,859 - distributed.scheduler - INFO - Remove client Client-45e3601a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:47,859 - distributed.scheduler - INFO - Remove client Client-45e3601a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:47,859 - distributed.scheduler - INFO - Close client connection: Client-45e3601a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:47,860 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:43713'.
-2022-08-26 14:08:47,860 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:08:47,860 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:42451'.
-2022-08-26 14:08:47,860 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:08:47,860 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40781
-2022-08-26 14:08:47,861 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42803
-2022-08-26 14:08:47,861 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ed15c578-ad9d-4adf-b9a7-56caafe880f2 Address tcp://127.0.0.1:40781 Status: Status.closing
-2022-08-26 14:08:47,862 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-704b2bb2-0d60-468d-9f62-4703591d670a Address tcp://127.0.0.1:42803 Status: Status.closing
-2022-08-26 14:08:47,861 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40781', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:47,862 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40781
-2022-08-26 14:08:47,862 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42803', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:47,862 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42803
-2022-08-26 14:08:47,863 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:48,064 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:48,064 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:48,273 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_restart_waits_for_new_workers SKIPPED
-distributed/tests/test_scheduler.py::test_restart_nanny_timeout_exceeded 2022-08-26 14:08:48,280 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:48,281 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:48,282 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37789
-2022-08-26 14:08:48,282 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39575
-2022-08-26 14:08:48,287 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:37385'
-2022-08-26 14:08:48,287 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33051'
-2022-08-26 14:08:48,995 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44233
-2022-08-26 14:08:48,995 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44233
-2022-08-26 14:08:48,996 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:48,996 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38533
-2022-08-26 14:08:48,996 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37789
-2022-08-26 14:08:48,996 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:48,996 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:48,996 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:48,996 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-w69sdgyl
-2022-08-26 14:08:48,996 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:48,996 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40033
-2022-08-26 14:08:48,996 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40033
-2022-08-26 14:08:48,996 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:48,996 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43497
-2022-08-26 14:08:48,996 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37789
-2022-08-26 14:08:48,996 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:48,996 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:48,996 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:48,996 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m7gmy5pz
-2022-08-26 14:08:48,996 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:49,253 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44233', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:49,254 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44233
-2022-08-26 14:08:49,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:49,254 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37789
-2022-08-26 14:08:49,254 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:49,254 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40033', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:49,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:49,254 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40033
-2022-08-26 14:08:49,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:49,254 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37789
-2022-08-26 14:08:49,255 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:49,255 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:49,280 - distributed.scheduler - INFO - Receive client connection: Client-47bb9dea-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:49,280 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:49,592 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:08:49,594 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:08:49,594 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:50,595 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40033', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:08:50,595 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40033
-2022-08-26 14:08:50,596 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44233', name: 1, status: running, memory: 0, processing: 0>
-2022-08-26 14:08:50,596 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44233
-2022-08-26 14:08:50,596 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:50,596 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40033
-2022-08-26 14:08:50,596 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44233
-2022-08-26 14:08:50,598 - distributed.core - ERROR - 2/2 nanny worker(s) did not shut down within 1s
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5258, in restart
-    raise TimeoutError(
-asyncio.exceptions.TimeoutError: 2/2 nanny worker(s) did not shut down within 1s
-2022-08-26 14:08:50,598 - distributed.core - ERROR - Exception while handling op restart
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5258, in restart
-    raise TimeoutError(
-asyncio.exceptions.TimeoutError: 2/2 nanny worker(s) did not shut down within 1s
-2022-08-26 14:08:50,600 - distributed.core - ERROR - Exception while handling op kill
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/locks.py", line 214, in wait
-    await fut
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_scheduler.py", line 666, in kill
-    await asyncio.wait_for(self.kill_proceed.wait(), timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-2022-08-26 14:08:50,601 - distributed.core - ERROR - Exception while handling op kill
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/locks.py", line 214, in wait
-    await fut
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_scheduler.py", line 666, in kill
-    await asyncio.wait_for(self.kill_proceed.wait(), timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-2022-08-26 14:08:50,602 - distributed.scheduler - INFO - Remove client Client-47bb9dea-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:50,602 - distributed.scheduler - INFO - Remove client Client-47bb9dea-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:50,602 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-32ed7ecb-0a3a-4ddc-89ec-9e9b8030a2b0 Address tcp://127.0.0.1:40033 Status: Status.closing
-2022-08-26 14:08:50,603 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-15aeeb03-a614-4d5d-81ac-381f3eb47831 Address tcp://127.0.0.1:44233 Status: Status.closing
-2022-08-26 14:08:50,603 - distributed.scheduler - INFO - Close client connection: Client-47bb9dea-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:50,603 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:08:50,603 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:08:50,604 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:08:50,604 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:37385'.
-2022-08-26 14:08:50,604 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:33051'.
-2022-08-26 14:08:55,604 - distributed.nanny - ERROR - Error in Nanny killing Worker subprocess
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/locks.py", line 214, in wait
-    await fut
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 595, in close
-    await self.kill(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_scheduler.py", line 666, in kill
-    await asyncio.wait_for(self.kill_proceed.wait(), timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-2022-08-26 14:08:55,605 - distributed.nanny - ERROR - Error in Nanny killing Worker subprocess
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/locks.py", line 214, in wait
-    await fut
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 595, in close
-    await self.kill(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_scheduler.py", line 666, in kill
-    await asyncio.wait_for(self.kill_proceed.wait(), timeout)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-2022-08-26 14:08:55,605 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:55,605 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:55,814 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-kill called
-kill called
-kill called
-kill called
-PASSED
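The chained tracebacks above are the standard asyncio.wait_for behaviour: when the timeout expires the awaited coroutine is cancelled, and the resulting CancelledError is re-raised as TimeoutError, which is why each traceback reads "The above exception was the direct cause of the following exception". A minimal sketch of that mechanism, independent of distributed (the event name and timeout value are illustrative only):

    import asyncio

    async def main() -> None:
        proceed = asyncio.Event()  # never set, so the wait below must time out
        try:
            # wait_for cancels the inner wait() once the timeout expires...
            await asyncio.wait_for(proceed.wait(), timeout=0.1)
        except asyncio.TimeoutError as exc:
            # ...and chains the CancelledError into the TimeoutError it raises,
            # matching the "direct cause" lines in the log above.
            print(f"timed out as expected: {exc!r}")

    asyncio.run(main())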
-distributed/tests/test_scheduler.py::test_restart_not_all_workers_return 2022-08-26 14:08:55,820 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:55,821 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:55,821 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40609
-2022-08-26 14:08:55,821 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37117
-2022-08-26 14:08:55,826 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45819
-2022-08-26 14:08:55,826 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45819
-2022-08-26 14:08:55,826 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:55,826 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35359
-2022-08-26 14:08:55,826 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40609
-2022-08-26 14:08:55,826 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:55,826 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:55,826 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:55,826 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ixvtth9e
-2022-08-26 14:08:55,826 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:55,827 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40259
-2022-08-26 14:08:55,827 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40259
-2022-08-26 14:08:55,827 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:55,827 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36957
-2022-08-26 14:08:55,827 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40609
-2022-08-26 14:08:55,827 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:55,827 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:55,827 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:55,827 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ji180oyt
-2022-08-26 14:08:55,827 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:55,830 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45819', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:55,831 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45819
-2022-08-26 14:08:55,831 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:55,831 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40259', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:55,831 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40259
-2022-08-26 14:08:55,831 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:55,832 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40609
-2022-08-26 14:08:55,832 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:55,832 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40609
-2022-08-26 14:08:55,832 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:55,832 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:55,832 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:55,846 - distributed.scheduler - INFO - Receive client connection: Client-4ba5824d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:55,846 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:55,847 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:08:55,847 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:55,847 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40259', name: 1, status: running, memory: 0, processing: 0>
-2022-08-26 14:08:55,847 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40259
-2022-08-26 14:08:55,848 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45819', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:08:55,848 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45819
-2022-08-26 14:08:55,848 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:55,848 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40259
-2022-08-26 14:08:55,849 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45819
-2022-08-26 14:08:55,850 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9b5941ac-239c-4399-82bb-c35cc145581f Address tcp://127.0.0.1:40259 Status: Status.closing
-2022-08-26 14:08:55,850 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cf55e4d7-d91c-44ef-a363-64bd08d02b0f Address tcp://127.0.0.1:45819 Status: Status.closing
-2022-08-26 14:08:56,853 - distributed.core - ERROR - Waited for 2 worker(s) to reconnect after restarting, but after 1s, only 0 have returned. Consider a longer timeout, or `wait_for_workers=False`. The 2 worker(s) not using Nannies were just shut down instead of restarted (restart is only possible with Nannies). If your deployment system does not automatically re-launch terminated processes, then those workers will never come back, and `Client.restart` will always time out. Do not use `Client.restart` in that case.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5285, in restart
-    raise TimeoutError(msg) from None
-asyncio.exceptions.TimeoutError: Waited for 2 worker(s) to reconnect after restarting, but after 1s, only 0 have returned. Consider a longer timeout, or `wait_for_workers=False`. The 2 worker(s) not using Nannies were just shut down instead of restarted (restart is only possible with Nannies). If your deployment system does not automatically re-launch terminated processes, then those workers will never come back, and `Client.restart` will always time out. Do not use `Client.restart` in that case.
-2022-08-26 14:08:56,853 - distributed.core - ERROR - Exception while handling op restart
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5285, in restart
-    raise TimeoutError(msg) from None
-asyncio.exceptions.TimeoutError: Waited for 2 worker(s) to reconnect after restarting, but after 1s, only 0 have returned. Consider a longer timeout, or `wait_for_workers=False`. The 2 worker(s) not using Nannies were just shut down instead of restarted (restart is only possible with Nannies). If your deployment system does not automatically re-launch terminated processes, then those workers will never come back, and `Client.restart` will always time out. Do not use `Client.restart` in that case.
-2022-08-26 14:08:56,856 - distributed.scheduler - INFO - Remove client Client-4ba5824d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:56,856 - distributed.scheduler - INFO - Remove client Client-4ba5824d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:56,856 - distributed.scheduler - INFO - Close client connection: Client-4ba5824d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:56,857 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:56,857 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:57,070 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
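The TimeoutError text above spells out the constraint: workers started without a Nanny are only shut down by a restart, so waiting for them to reconnect can never succeed unless something external relaunches them. A hedged usage sketch from the client side, assuming Client.restart in this version accepts the timeout and wait_for_workers keywords that the message refers to (the scheduler address is illustrative):

    from distributed import Client

    client = Client("tcp://127.0.0.1:8786")  # illustrative scheduler address

    # Give Nanny-managed workers more time to come back...
    client.restart(timeout=60)

    # ...or, if nothing will relaunch the worker processes, skip the wait
    # entirely, as the error message itself suggests.
    client.restart(wait_for_workers=False)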
-distributed/tests/test_scheduler.py::test_restart_worker_rejoins_after_timeout_expired 2022-08-26 14:08:57,076 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:57,078 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:57,078 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41557
-2022-08-26 14:08:57,078 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34097
-2022-08-26 14:08:57,081 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39897
-2022-08-26 14:08:57,081 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39897
-2022-08-26 14:08:57,081 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:57,081 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41019
-2022-08-26 14:08:57,081 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41557
-2022-08-26 14:08:57,081 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,081 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:57,081 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:57,081 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ykvmdqh9
-2022-08-26 14:08:57,081 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,083 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39897', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:57,083 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39897
-2022-08-26 14:08:57,083 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:57,083 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41557
-2022-08-26 14:08:57,084 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,084 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:57,097 - distributed.scheduler - INFO - Receive client connection: Client-4c647311-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:57,098 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:57,099 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:08:57,099 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:57,099 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39897', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:08:57,099 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39897
-2022-08-26 14:08:57,102 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45105
-2022-08-26 14:08:57,102 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45105
-2022-08-26 14:08:57,102 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33401
-2022-08-26 14:08:57,102 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41557
-2022-08-26 14:08:57,102 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,102 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:57,102 - distributed.worker - INFO -                Memory:                   5.24 GiB
-2022-08-26 14:08:57,102 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4bxp4eiz
-2022-08-26 14:08:57,102 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,103 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39897
-2022-08-26 14:08:57,104 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-572a1a3c-353b-4c20-a7e0-729f366b4e42 Address tcp://127.0.0.1:39897 Status: Status.closing
-2022-08-26 14:08:57,105 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45105', status: init, memory: 0, processing: 0>
-2022-08-26 14:08:57,106 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45105
-2022-08-26 14:08:57,106 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:57,106 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41557
-2022-08-26 14:08:57,106 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,106 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:57,107 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45105
-2022-08-26 14:08:57,108 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-354dfaf4-061b-4156-be3a-e65ebfbe0485 Address tcp://127.0.0.1:45105 Status: Status.closing
-2022-08-26 14:08:57,108 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45105', status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:57,108 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45105
-2022-08-26 14:08:57,108 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:57,109 - distributed.scheduler - INFO - Remove client Client-4c647311-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:57,109 - distributed.scheduler - INFO - Remove client Client-4c647311-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:57,109 - distributed.scheduler - INFO - Close client connection: Client-4c647311-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:57,110 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:57,110 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:57,319 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_restart_no_wait_for_workers 2022-08-26 14:08:57,325 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:57,326 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:57,326 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37549
-2022-08-26 14:08:57,327 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39525
-2022-08-26 14:08:57,331 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40917
-2022-08-26 14:08:57,331 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40917
-2022-08-26 14:08:57,331 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:57,331 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34753
-2022-08-26 14:08:57,331 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37549
-2022-08-26 14:08:57,331 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,331 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:57,331 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:57,331 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-byox7dcn
-2022-08-26 14:08:57,332 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,332 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46407
-2022-08-26 14:08:57,332 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46407
-2022-08-26 14:08:57,332 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:08:57,332 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45763
-2022-08-26 14:08:57,332 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37549
-2022-08-26 14:08:57,332 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,332 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:57,332 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:57,332 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-htbvnm_f
-2022-08-26 14:08:57,332 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,335 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40917', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:57,336 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40917
-2022-08-26 14:08:57,336 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:57,336 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46407', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:57,336 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46407
-2022-08-26 14:08:57,336 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:57,337 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37549
-2022-08-26 14:08:57,337 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,337 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37549
-2022-08-26 14:08:57,337 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:57,337 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:57,337 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:57,351 - distributed.scheduler - INFO - Receive client connection: Client-4c8b2d3b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:57,352 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:57,353 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:08:57,353 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:57,353 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40917', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:08:57,353 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40917
-2022-08-26 14:08:57,353 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46407', name: 1, status: running, memory: 0, processing: 0>
-2022-08-26 14:08:57,353 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46407
-2022-08-26 14:08:57,353 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:57,354 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40917
-2022-08-26 14:08:57,354 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46407
-2022-08-26 14:08:57,356 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c84897af-8dbe-48ac-839f-c6b600c8eb06 Address tcp://127.0.0.1:40917 Status: Status.closing
-2022-08-26 14:08:57,356 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2606c6b7-6617-40b3-a07c-279a7f033b5f Address tcp://127.0.0.1:46407 Status: Status.closing
-2022-08-26 14:08:57,363 - distributed.scheduler - INFO - Remove client Client-4c8b2d3b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:57,363 - distributed.scheduler - INFO - Remove client Client-4c8b2d3b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:57,364 - distributed.scheduler - INFO - Close client connection: Client-4c8b2d3b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:57,364 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:08:57,364 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:08:57,571 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_restart_some_nannies_some_not SKIPPED
-distributed/tests/test_scheduler.py::test_restart_heartbeat_before_closing 2022-08-26 14:08:57,578 - distributed.scheduler - INFO - State start
-2022-08-26 14:08:57,580 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:57,580 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33901
-2022-08-26 14:08:57,580 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37317
-2022-08-26 14:08:57,583 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36503'
-2022-08-26 14:08:58,268 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38507
-2022-08-26 14:08:58,268 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38507
-2022-08-26 14:08:58,268 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:58,268 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35301
-2022-08-26 14:08:58,268 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33901
-2022-08-26 14:08:58,268 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:58,268 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:58,268 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:58,268 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-u_ug65v1
-2022-08-26 14:08:58,268 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:58,523 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38507', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:08:58,524 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38507
-2022-08-26 14:08:58,524 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:58,524 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33901
-2022-08-26 14:08:58,524 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:58,525 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:58,578 - distributed.scheduler - INFO - Receive client connection: Client-4d46649e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:08:58,579 - distributed.core - INFO - Starting established connection
-2022-08-26 14:08:58,579 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:08:58,579 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:08:59,083 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:08:59,084 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38507
-2022-08-26 14:08:59,085 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c3f17a2a-4b7a-49ef-968b-42a2895733be Address tcp://127.0.0.1:38507 Status: Status.closing
-2022-08-26 14:08:59,085 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38507', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:08:59,085 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38507
-2022-08-26 14:08:59,085 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:08:59,220 - distributed.nanny - WARNING - Restarting worker
-kill called
-kill proceed
-2022-08-26 14:08:59,896 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38993
-2022-08-26 14:08:59,896 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38993
-2022-08-26 14:08:59,896 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:08:59,896 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37073
-2022-08-26 14:08:59,896 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33901
-2022-08-26 14:08:59,896 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:08:59,896 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:08:59,896 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:08:59,896 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nrgphx89
-2022-08-26 14:08:59,897 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,151 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38993', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:00,152 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38993
-2022-08-26 14:09:00,152 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:00,152 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33901
-2022-08-26 14:09:00,152 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,152 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:00,225 - distributed.scheduler - INFO - Remove client Client-4d46649e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:00,226 - distributed.scheduler - INFO - Remove client Client-4d46649e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:00,226 - distributed.scheduler - INFO - Close client connection: Client-4d46649e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:00,226 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36503'.
-2022-08-26 14:09:00,226 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:00,227 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38993
-2022-08-26 14:09:00,228 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9cf5f363-aacc-46e8-8e45-233ffb7fd319 Address tcp://127.0.0.1:38993 Status: Status.closing
-2022-08-26 14:09:00,228 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38993', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:00,228 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38993
-2022-08-26 14:09:00,228 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:00,364 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:00,364 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:00,574 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-kill called
-kill proceed
-PASSED
-distributed/tests/test_scheduler.py::test_broadcast 2022-08-26 14:09:00,579 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:00,581 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:00,581 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46739
-2022-08-26 14:09:00,581 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40781
-2022-08-26 14:09:00,586 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45801
-2022-08-26 14:09:00,586 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45801
-2022-08-26 14:09:00,586 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:00,586 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39407
-2022-08-26 14:09:00,586 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46739
-2022-08-26 14:09:00,586 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,586 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:00,586 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:00,586 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ewt8lbdk
-2022-08-26 14:09:00,586 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,587 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34107
-2022-08-26 14:09:00,587 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34107
-2022-08-26 14:09:00,587 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:00,587 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43951
-2022-08-26 14:09:00,587 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46739
-2022-08-26 14:09:00,587 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,587 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:00,587 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:00,587 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-40wy06v5
-2022-08-26 14:09:00,587 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,590 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45801', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:00,590 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45801
-2022-08-26 14:09:00,590 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:00,590 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34107', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:00,591 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34107
-2022-08-26 14:09:00,591 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:00,591 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46739
-2022-08-26 14:09:00,591 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,591 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46739
-2022-08-26 14:09:00,591 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,592 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:00,592 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:00,609 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45801
-2022-08-26 14:09:00,609 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34107
-2022-08-26 14:09:00,610 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45801', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:00,610 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45801
-2022-08-26 14:09:00,610 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34107', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:00,610 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34107
-2022-08-26 14:09:00,610 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:00,610 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-34cbd479-4417-49fc-b99e-7dbb891256f5 Address tcp://127.0.0.1:45801 Status: Status.closing
-2022-08-26 14:09:00,611 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-606a920e-ce93-4970-97bf-cd7d3ce5e0de Address tcp://127.0.0.1:34107 Status: Status.closing
-2022-08-26 14:09:00,612 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:00,612 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:00,818 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_broadcast_tls 2022-08-26 14:09:00,824 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:00,826 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:00,826 - distributed.scheduler - INFO -   Scheduler at:     tls://127.0.0.1:46521
-2022-08-26 14:09:00,826 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43973
-2022-08-26 14:09:00,832 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:41987
-2022-08-26 14:09:00,832 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:41987
-2022-08-26 14:09:00,833 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:00,833 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35215
-2022-08-26 14:09:00,833 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:46521
-2022-08-26 14:09:00,833 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,833 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:00,833 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:00,833 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-23llkfhe
-2022-08-26 14:09:00,833 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,834 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:40089
-2022-08-26 14:09:00,834 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:40089
-2022-08-26 14:09:00,834 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:00,834 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35137
-2022-08-26 14:09:00,834 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:46521
-2022-08-26 14:09:00,834 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,834 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:00,834 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:00,834 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rxhrc5kc
-2022-08-26 14:09:00,834 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,842 - distributed.scheduler - INFO - Register worker <WorkerState 'tls://127.0.0.1:41987', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:00,843 - distributed.scheduler - INFO - Starting worker compute stream, tls://127.0.0.1:41987
-2022-08-26 14:09:00,843 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:00,843 - distributed.scheduler - INFO - Register worker <WorkerState 'tls://127.0.0.1:40089', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:00,843 - distributed.scheduler - INFO - Starting worker compute stream, tls://127.0.0.1:40089
-2022-08-26 14:09:00,843 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:00,844 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:46521
-2022-08-26 14:09:00,844 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,844 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:46521
-2022-08-26 14:09:00,844 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:00,844 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:00,844 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:00,873 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:41987
-2022-08-26 14:09:00,874 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:40089
-2022-08-26 14:09:00,875 - distributed.scheduler - INFO - Remove worker <WorkerState 'tls://127.0.0.1:41987', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:00,875 - distributed.core - INFO - Removing comms to tls://127.0.0.1:41987
-2022-08-26 14:09:00,875 - distributed.scheduler - INFO - Remove worker <WorkerState 'tls://127.0.0.1:40089', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:00,875 - distributed.core - INFO - Removing comms to tls://127.0.0.1:40089
-2022-08-26 14:09:00,875 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:00,875 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b27d9a86-d93f-4740-a800-609033237d12 Address tls://127.0.0.1:41987 Status: Status.closing
-2022-08-26 14:09:00,876 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a14950a7-8da8-43af-b3ab-5ef68295d564 Address tls://127.0.0.1:40089 Status: Status.closing
-2022-08-26 14:09:00,876 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:00,877 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:01,082 - distributed.utils_perf - WARNING - full garbage collections took 82% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_broadcast_nanny 2022-08-26 14:09:01,088 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:01,090 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:01,090 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40415
-2022-08-26 14:09:01,090 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36197
-2022-08-26 14:09:01,095 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:42003'
-2022-08-26 14:09:01,095 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:37269'
-2022-08-26 14:09:01,767 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43041
-2022-08-26 14:09:01,767 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43041
-2022-08-26 14:09:01,767 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:01,767 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44031
-2022-08-26 14:09:01,767 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40415
-2022-08-26 14:09:01,767 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:01,767 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:01,767 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:01,767 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bou3b49i
-2022-08-26 14:09:01,767 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:01,768 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35897
-2022-08-26 14:09:01,768 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35897
-2022-08-26 14:09:01,768 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:01,768 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42873
-2022-08-26 14:09:01,768 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40415
-2022-08-26 14:09:01,768 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:01,768 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:01,768 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:01,768 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2vm5ym2x
-2022-08-26 14:09:01,768 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:02,007 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35897', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:02,008 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35897
-2022-08-26 14:09:02,008 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:02,008 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40415
-2022-08-26 14:09:02,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:02,008 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:02,025 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43041', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:02,026 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43041
-2022-08-26 14:09:02,026 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:02,026 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40415
-2022-08-26 14:09:02,026 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:02,026 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:02,041 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:42003'.
-2022-08-26 14:09:02,041 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:02,041 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:37269'.
-2022-08-26 14:09:02,042 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:02,042 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35897
-2022-08-26 14:09:02,042 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43041
-2022-08-26 14:09:02,043 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-10877096-d436-43fc-b15f-27787b3f11ee Address tcp://127.0.0.1:35897 Status: Status.closing
-2022-08-26 14:09:02,043 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35897', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:02,043 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9c15bc79-8af9-4d85-b0a7-4b1b5b1b1276 Address tcp://127.0.0.1:43041 Status: Status.closing
-2022-08-26 14:09:02,043 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35897
-2022-08-26 14:09:02,043 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43041', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:02,043 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43041
-2022-08-26 14:09:02,043 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:02,176 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:02,176 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:02,382 - distributed.utils_perf - WARNING - full garbage collections took 84% CPU time recently (threshold: 10%)
-PASSED
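The three broadcast tests above exercise the scheduler's fan-out RPC over plain TCP, TLS, and nanny channels. The closest user-facing equivalent is Client.run, which calls a function on every connected worker (or nanny) and collects the results per address; a small sketch, with the scheduler address again illustrative:

    import os
    from distributed import Client

    client = Client("tcp://127.0.0.1:8786")  # illustrative scheduler address

    # Runs os.getpid on every connected worker; returns {worker_address: pid}.
    print(client.run(os.getpid))

    # nanny=True targets the nanny processes instead, mirroring
    # test_broadcast_nanny above.
    print(client.run(os.getpid, nanny=True))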
-distributed/tests/test_scheduler.py::test_broadcast_on_error 2022-08-26 14:09:02,387 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:02,389 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:02,389 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34735
-2022-08-26 14:09:02,389 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45573
-2022-08-26 14:09:02,394 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45981
-2022-08-26 14:09:02,394 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45981
-2022-08-26 14:09:02,394 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:02,394 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45103
-2022-08-26 14:09:02,394 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34735
-2022-08-26 14:09:02,394 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:02,394 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:02,394 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:02,394 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xcev0lzd
-2022-08-26 14:09:02,394 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:02,395 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46397
-2022-08-26 14:09:02,395 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46397
-2022-08-26 14:09:02,395 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:02,395 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33377
-2022-08-26 14:09:02,395 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34735
-2022-08-26 14:09:02,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:02,395 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:02,395 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:02,395 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bezrq9tl
-2022-08-26 14:09:02,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:02,398 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45981', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:02,398 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45981
-2022-08-26 14:09:02,398 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:02,399 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46397', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:02,399 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46397
-2022-08-26 14:09:02,399 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:02,399 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34735
-2022-08-26 14:09:02,399 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:02,399 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34735
-2022-08-26 14:09:02,399 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:02,400 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:02,400 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:02,611 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:45981 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:45981 after 0.2 s
-2022-08-26 14:09:02,811 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:45981 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:45981 after 0.2 s
-2022-08-26 14:09:03,013 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:45981 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:45981 after 0.2 s
-2022-08-26 14:09:03,214 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:45981 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:45981 after 0.2 s
-2022-08-26 14:09:03,415 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:45981 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:45981 after 0.2 s
-2022-08-26 14:09:03,415 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45981
-2022-08-26 14:09:03,416 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46397
-2022-08-26 14:09:03,417 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45981', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:03,417 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45981
-2022-08-26 14:09:03,417 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46397', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:03,417 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46397
-2022-08-26 14:09:03,417 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:03,417 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f361f3c1-999b-4db3-9f44-59ee76e8dfb4 Address tcp://127.0.0.1:45981 Status: Status.closing
-2022-08-26 14:09:03,418 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-48f7af76-b21c-4baf-8a38-930acfe361a9 Address tcp://127.0.0.1:46397 Status: Status.closing
-2022-08-26 14:09:03,418 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:03,419 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:03,624 - distributed.utils_perf - WARNING - full garbage collections took 84% CPU time recently (threshold: 10%)
-PASSED
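In test_broadcast_on_error the scheduler logs each unreachable worker ("broadcast to ... failed: OSError: Timed out ...") and carries on instead of aborting the whole broadcast. The same tolerate-and-collect pattern can be written in plain asyncio, shown here without any distributed API (the addresses and the failure rule are made up for illustration):

    import asyncio

    async def call_worker(addr: str) -> str:
        # Illustrative stand-in for an RPC call; one address always "times out".
        if addr.endswith(":45981"):
            raise OSError(f"Timed out trying to connect to {addr}")
        return f"ok from {addr}"

    async def broadcast(addrs):
        # return_exceptions=True collects failures alongside results instead of
        # letting the first error cancel the remaining calls.
        results = await asyncio.gather(*(call_worker(a) for a in addrs),
                                       return_exceptions=True)
        return dict(zip(addrs, results))

    print(asyncio.run(broadcast(["tcp://127.0.0.1:45981", "tcp://127.0.0.1:46397"])))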
-distributed/tests/test_scheduler.py::test_broadcast_deprecation 2022-08-26 14:09:03,630 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:03,631 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:03,631 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35461
-2022-08-26 14:09:03,631 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40107
-2022-08-26 14:09:03,636 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38605
-2022-08-26 14:09:03,636 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38605
-2022-08-26 14:09:03,636 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:03,636 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38723
-2022-08-26 14:09:03,636 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35461
-2022-08-26 14:09:03,636 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:03,636 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:03,636 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:03,636 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_fk4ml7k
-2022-08-26 14:09:03,636 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:03,637 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42843
-2022-08-26 14:09:03,637 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42843
-2022-08-26 14:09:03,637 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:03,637 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42345
-2022-08-26 14:09:03,637 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35461
-2022-08-26 14:09:03,637 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:03,637 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:03,637 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:03,637 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q372rbsj
-2022-08-26 14:09:03,637 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:03,640 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38605', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:03,640 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38605
-2022-08-26 14:09:03,641 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:03,641 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42843', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:03,641 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42843
-2022-08-26 14:09:03,641 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:03,642 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35461
-2022-08-26 14:09:03,642 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:03,642 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35461
-2022-08-26 14:09:03,642 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:03,642 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:03,642 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:03,656 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38605
-2022-08-26 14:09:03,656 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42843
-2022-08-26 14:09:03,657 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38605', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:03,657 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38605
-2022-08-26 14:09:03,657 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42843', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:03,657 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42843
-2022-08-26 14:09:03,657 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:03,657 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-40a7ff8a-8e5b-4b36-8bb8-e6df98418ad5 Address tcp://127.0.0.1:38605 Status: Status.closing
-2022-08-26 14:09:03,658 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-07312f69-83c7-48d5-96a7-0fee01aeffac Address tcp://127.0.0.1:42843 Status: Status.closing
-2022-08-26 14:09:03,658 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:03,659 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:03,864 - distributed.utils_perf - WARNING - full garbage collections took 84% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_worker_name 2022-08-26 14:09:03,870 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:03,871 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:03,871 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44071
-2022-08-26 14:09:03,871 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37307
-2022-08-26 14:09:03,874 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45521
-2022-08-26 14:09:03,874 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45521
-2022-08-26 14:09:03,874 - distributed.worker - INFO -           Worker name:                      alice
-2022-08-26 14:09:03,874 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41313
-2022-08-26 14:09:03,874 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44071
-2022-08-26 14:09:03,874 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:03,874 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:03,875 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:03,875 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bdujqtfi
-2022-08-26 14:09:03,875 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:03,876 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45521', name: alice, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:03,877 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45521
-2022-08-26 14:09:03,877 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:03,877 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44071
-2022-08-26 14:09:03,877 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:03,880 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41081
-2022-08-26 14:09:03,880 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41081
-2022-08-26 14:09:03,880 - distributed.worker - INFO -           Worker name:                      alice
-2022-08-26 14:09:03,880 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40183
-2022-08-26 14:09:03,880 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44071
-2022-08-26 14:09:03,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:03,880 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:03,880 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:03,880 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nw3v2vzj
-2022-08-26 14:09:03,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:03,880 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:03,882 - distributed.scheduler - WARNING - Worker tried to connect with a duplicate name: alice
-2022-08-26 14:09:03,882 - distributed.worker - ERROR - Unable to connect to scheduler: name taken, alice
-2022-08-26 14:09:03,882 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41081
-2022-08-26 14:09:03,883 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45521
-2022-08-26 14:09:03,884 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45521', name: alice, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:03,884 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45521
-2022-08-26 14:09:03,884 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:03,884 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0ab70f37-f256-4efc-83fa-8c08f8bb9040 Address tcp://127.0.0.1:45521 Status: Status.closing
-2022-08-26 14:09:03,885 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:03,885 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:04,094 - distributed.utils_perf - WARNING - full garbage collections took 85% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_coerce_address 2022-08-26 14:09:04,099 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:04,101 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:04,101 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36527
-2022-08-26 14:09:04,101 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44025
-2022-08-26 14:09:04,107 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46815
-2022-08-26 14:09:04,107 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46815
-2022-08-26 14:09:04,107 - distributed.worker - INFO -           Worker name:                      alice
-2022-08-26 14:09:04,107 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38147
-2022-08-26 14:09:04,107 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36527
-2022-08-26 14:09:04,108 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,108 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:04,108 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:04,108 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vwfqb7_3
-2022-08-26 14:09:04,108 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,108 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36131
-2022-08-26 14:09:04,108 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36131
-2022-08-26 14:09:04,108 - distributed.worker - INFO -           Worker name:                        123
-2022-08-26 14:09:04,108 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43943
-2022-08-26 14:09:04,109 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36527
-2022-08-26 14:09:04,109 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,109 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:04,109 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:04,109 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m3uo35z7
-2022-08-26 14:09:04,109 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,109 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46505
-2022-08-26 14:09:04,109 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46505
-2022-08-26 14:09:04,109 - distributed.worker - INFO -           Worker name:                    charlie
-2022-08-26 14:09:04,110 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45965
-2022-08-26 14:09:04,110 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36527
-2022-08-26 14:09:04,110 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,110 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:04,110 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:04,110 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kcawwvp7
-2022-08-26 14:09:04,110 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,114 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46815', name: alice, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:04,114 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46815
-2022-08-26 14:09:04,114 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:04,114 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36131', name: 123, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:04,115 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36131
-2022-08-26 14:09:04,115 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:04,115 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46505', name: charlie, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:04,115 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46505
-2022-08-26 14:09:04,115 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:04,116 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36527
-2022-08-26 14:09:04,116 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,116 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36527
-2022-08-26 14:09:04,116 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,116 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36527
-2022-08-26 14:09:04,116 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,117 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:04,117 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:04,117 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:04,117 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46815
-2022-08-26 14:09:04,118 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36131
-2022-08-26 14:09:04,118 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46505
-2022-08-26 14:09:04,119 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2c88b6a3-574b-4f9d-9b07-abaf5eaebe2a Address tcp://127.0.0.1:46815 Status: Status.closing
-2022-08-26 14:09:04,119 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0344e691-acaa-40bd-a91a-2ff32b3562d5 Address tcp://127.0.0.1:36131 Status: Status.closing
-2022-08-26 14:09:04,120 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-72d3213f-9f60-4026-9c9d-da7c88615633 Address tcp://127.0.0.1:46505 Status: Status.closing
-2022-08-26 14:09:04,120 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46815', name: alice, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:04,120 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46815
-2022-08-26 14:09:04,120 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36131', name: 123, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:04,121 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36131
-2022-08-26 14:09:04,121 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46505', name: charlie, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:04,121 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46505
-2022-08-26 14:09:04,121 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:04,122 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:04,122 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:04,327 - distributed.utils_perf - WARNING - full garbage collections took 85% CPU time recently (threshold: 10%)
-scheduler: tcp://127.0.0.1:36527 tcp://127.0.0.1:36527
-PASSED
-distributed/tests/test_scheduler.py::test_config_stealing 2022-08-26 14:09:04,333 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:04,335 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:04,335 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39561
-2022-08-26 14:09:04,335 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34037
-2022-08-26 14:09:04,335 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:04,336 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:04,540 - distributed.utils_perf - WARNING - full garbage collections took 85% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_config_no_stealing 2022-08-26 14:09:04,545 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:04,547 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:04,547 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44449
-2022-08-26 14:09:04,547 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41187
-2022-08-26 14:09:04,547 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:04,547 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:04,751 - distributed.utils_perf - WARNING - full garbage collections took 85% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_file_descriptors_dont_leak 2022-08-26 14:09:04,757 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:04,758 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:04,759 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45103
-2022-08-26 14:09:04,759 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43705
-2022-08-26 14:09:04,761 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41175
-2022-08-26 14:09:04,762 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41175
-2022-08-26 14:09:04,762 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42353
-2022-08-26 14:09:04,762 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45103
-2022-08-26 14:09:04,762 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,762 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:04,762 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:04,762 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zqdaw2uo
-2022-08-26 14:09:04,762 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,764 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41175', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:04,764 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41175
-2022-08-26 14:09:04,764 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:04,764 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45103
-2022-08-26 14:09:04,764 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,765 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41175
-2022-08-26 14:09:04,765 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:04,765 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-630492bb-d315-4c0c-999c-c0f06a0d9060 Address tcp://127.0.0.1:41175 Status: Status.closing
-2022-08-26 14:09:04,766 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41175', status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:04,766 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41175
-2022-08-26 14:09:04,766 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:04,766 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:04,767 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:04,971 - distributed.utils_perf - WARNING - full garbage collections took 85% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_update_graph_culls 2022-08-26 14:09:04,977 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:04,978 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:04,979 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40155
-2022-08-26 14:09:04,979 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40879
-2022-08-26 14:09:04,983 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38651
-2022-08-26 14:09:04,983 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38651
-2022-08-26 14:09:04,983 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:04,983 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34699
-2022-08-26 14:09:04,983 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40155
-2022-08-26 14:09:04,983 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,983 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:04,983 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:04,983 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i__dqmgb
-2022-08-26 14:09:04,984 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,984 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42689
-2022-08-26 14:09:04,984 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42689
-2022-08-26 14:09:04,984 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:04,984 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38009
-2022-08-26 14:09:04,984 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40155
-2022-08-26 14:09:04,984 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,984 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:04,984 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:04,984 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tuheymrg
-2022-08-26 14:09:04,985 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,987 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38651', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:04,988 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38651
-2022-08-26 14:09:04,988 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:04,988 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42689', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:04,988 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42689
-2022-08-26 14:09:04,988 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:04,989 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40155
-2022-08-26 14:09:04,989 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,989 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40155
-2022-08-26 14:09:04,989 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:04,989 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:04,989 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:05,001 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38651
-2022-08-26 14:09:05,001 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42689
-2022-08-26 14:09:05,002 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38651', name: 0, status: closing, memory: 0, processing: 1>
-2022-08-26 14:09:05,002 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38651
-2022-08-26 14:09:05,003 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42689', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:05,003 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42689
-2022-08-26 14:09:05,003 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:05,003 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0508925e-95c7-4533-b4d1-488f7c2e0635 Address tcp://127.0.0.1:38651 Status: Status.closing
-2022-08-26 14:09:05,003 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c5c1467b-a2b2-4b12-a91e-1da4a0c7bf03 Address tcp://127.0.0.1:42689 Status: Status.closing
-2022-08-26 14:09:05,004 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:05,004 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:05,209 - distributed.utils_perf - WARNING - full garbage collections took 85% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_io_loop 2022-08-26 14:09:05,235 - distributed.scheduler - INFO - State start
-PASSED
-distributed/tests/test_scheduler.py::test_story 2022-08-26 14:09:05,242 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:05,243 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:05,244 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38263
-2022-08-26 14:09:05,244 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38127
-2022-08-26 14:09:05,248 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32965
-2022-08-26 14:09:05,248 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32965
-2022-08-26 14:09:05,248 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:05,248 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44015
-2022-08-26 14:09:05,248 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38263
-2022-08-26 14:09:05,248 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,248 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:05,248 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:05,248 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j4wtmm23
-2022-08-26 14:09:05,249 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,249 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37671
-2022-08-26 14:09:05,249 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37671
-2022-08-26 14:09:05,249 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:05,249 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33427
-2022-08-26 14:09:05,249 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38263
-2022-08-26 14:09:05,249 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,249 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:05,249 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:05,249 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gk8j0oxv
-2022-08-26 14:09:05,250 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,252 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32965', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:05,253 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32965
-2022-08-26 14:09:05,253 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:05,253 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37671', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:05,253 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37671
-2022-08-26 14:09:05,253 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:05,254 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38263
-2022-08-26 14:09:05,254 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,254 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38263
-2022-08-26 14:09:05,254 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:05,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:05,268 - distributed.scheduler - INFO - Receive client connection: Client-51433c3a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:05,268 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:05,290 - distributed.scheduler - INFO - Remove client Client-51433c3a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:05,290 - distributed.scheduler - INFO - Remove client Client-51433c3a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:05,290 - distributed.scheduler - INFO - Close client connection: Client-51433c3a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:05,291 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32965
-2022-08-26 14:09:05,291 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37671
-2022-08-26 14:09:05,292 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32965', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:05,292 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32965
-2022-08-26 14:09:05,292 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37671', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:05,292 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37671
-2022-08-26 14:09:05,292 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:05,293 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c309efab-3998-4c9f-9463-ce50ac016187 Address tcp://127.0.0.1:32965 Status: Status.closing
-2022-08-26 14:09:05,293 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ae5693ed-3e14-40b8-a7db-ad5766913446 Address tcp://127.0.0.1:37671 Status: Status.closing
-2022-08-26 14:09:05,294 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:05,294 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:05,499 - distributed.utils_perf - WARNING - full garbage collections took 84% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_scatter_no_workers[False] 2022-08-26 14:09:05,505 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:05,507 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:05,507 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38269
-2022-08-26 14:09:05,507 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35487
-2022-08-26 14:09:05,510 - distributed.scheduler - INFO - Receive client connection: Client-51682fb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:05,511 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:05,713 - distributed.core - ERROR - Exception while handling op scatter
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 5075, in scatter
-    raise TimeoutError("No valid workers found")
-asyncio.exceptions.TimeoutError: No valid workers found
-2022-08-26 14:09:05,818 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37037
-2022-08-26 14:09:05,818 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37037
-2022-08-26 14:09:05,818 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40857
-2022-08-26 14:09:05,818 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38269
-2022-08-26 14:09:05,818 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,818 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:05,818 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:05,818 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gxs1opz3
-2022-08-26 14:09:05,818 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,820 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37037', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:05,821 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37037
-2022-08-26 14:09:05,821 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:05,821 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38269
-2022-08-26 14:09:05,821 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,822 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:05,925 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37037
-2022-08-26 14:09:05,926 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37037', status: closing, memory: 1, processing: 0>
-2022-08-26 14:09:05,926 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37037
-2022-08-26 14:09:05,926 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:05,927 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2b1d110d-5686-4c74-8383-9dd3f899ecaf Address tcp://127.0.0.1:37037 Status: Status.closing
-2022-08-26 14:09:05,931 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40377
-2022-08-26 14:09:05,931 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40377
-2022-08-26 14:09:05,931 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45419
-2022-08-26 14:09:05,931 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38269
-2022-08-26 14:09:05,931 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,931 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:05,931 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:05,931 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k0oxvthq
-2022-08-26 14:09:05,931 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,933 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40377', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:05,933 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40377
-2022-08-26 14:09:05,933 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:05,933 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38269
-2022-08-26 14:09:05,933 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:05,934 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:06,035 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40377
-2022-08-26 14:09:06,036 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40377', status: closing, memory: 1, processing: 0>
-2022-08-26 14:09:06,036 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40377
-2022-08-26 14:09:06,036 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:06,037 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f4ea79da-7119-41fe-92be-662fccae8786 Address tcp://127.0.0.1:40377 Status: Status.closing
-2022-08-26 14:09:06,046 - distributed.scheduler - INFO - Remove client Client-51682fb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:06,046 - distributed.scheduler - INFO - Remove client Client-51682fb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:06,047 - distributed.scheduler - INFO - Close client connection: Client-51682fb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:06,047 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:06,047 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:06,253 - distributed.utils_perf - WARNING - full garbage collections took 84% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_scatter_no_workers[True] 2022-08-26 14:09:06,259 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:06,261 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:06,261 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42663
-2022-08-26 14:09:06,261 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40729
-2022-08-26 14:09:06,264 - distributed.scheduler - INFO - Receive client connection: Client-51db3216-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:06,264 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:06,570 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33533
-2022-08-26 14:09:06,570 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33533
-2022-08-26 14:09:06,570 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37985
-2022-08-26 14:09:06,570 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42663
-2022-08-26 14:09:06,570 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:06,570 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:06,570 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:06,570 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2gwr5cqk
-2022-08-26 14:09:06,570 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:06,572 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33533', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:06,572 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33533
-2022-08-26 14:09:06,572 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:06,573 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42663
-2022-08-26 14:09:06,573 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:06,573 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:06,678 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33533
-2022-08-26 14:09:06,679 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33533', status: closing, memory: 1, processing: 0>
-2022-08-26 14:09:06,679 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33533
-2022-08-26 14:09:06,679 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:06,679 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-eb7f0f80-67ce-4924-b8a1-a2c006452162 Address tcp://127.0.0.1:33533 Status: Status.closing
-2022-08-26 14:09:06,683 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43285
-2022-08-26 14:09:06,683 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43285
-2022-08-26 14:09:06,683 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42965
-2022-08-26 14:09:06,683 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42663
-2022-08-26 14:09:06,683 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:06,683 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:06,684 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:06,684 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9dcvi4rn
-2022-08-26 14:09:06,684 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:06,686 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43285', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:06,686 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43285
-2022-08-26 14:09:06,686 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:06,686 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42663
-2022-08-26 14:09:06,686 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:06,687 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:06,788 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43285
-2022-08-26 14:09:06,789 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43285', status: closing, memory: 1, processing: 0>
-2022-08-26 14:09:06,789 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43285
-2022-08-26 14:09:06,790 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:06,790 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c5c7ef9b-92ff-45fd-927e-60c94cf3ea1c Address tcp://127.0.0.1:43285 Status: Status.closing
-2022-08-26 14:09:06,799 - distributed.scheduler - INFO - Remove client Client-51db3216-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:06,799 - distributed.scheduler - INFO - Remove client Client-51db3216-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:06,800 - distributed.scheduler - INFO - Close client connection: Client-51db3216-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:06,800 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:06,800 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:07,006 - distributed.utils_perf - WARNING - full garbage collections took 84% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_scheduler_sees_memory_limits 2022-08-26 14:09:07,011 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:07,013 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:07,013 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35833
-2022-08-26 14:09:07,013 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42897
-2022-08-26 14:09:07,016 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36755
-2022-08-26 14:09:07,016 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36755
-2022-08-26 14:09:07,016 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42509
-2022-08-26 14:09:07,016 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35833
-2022-08-26 14:09:07,016 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,016 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:09:07,016 - distributed.worker - INFO -                Memory:                  12.06 kiB
-2022-08-26 14:09:07,016 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hu0anut_
-2022-08-26 14:09:07,016 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,018 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36755', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:07,018 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36755
-2022-08-26 14:09:07,018 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,018 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35833
-2022-08-26 14:09:07,019 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,019 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36755
-2022-08-26 14:09:07,019 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,019 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2505a05c-0223-41ef-9e1a-df44d241b11c Address tcp://127.0.0.1:36755 Status: Status.closing
-2022-08-26 14:09:07,020 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36755', status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:07,020 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36755
-2022-08-26 14:09:07,020 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:07,020 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:07,021 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:07,225 - distributed.utils_perf - WARNING - full garbage collections took 84% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_retire_workers 2022-08-26 14:09:07,231 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:07,232 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:07,232 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35669
-2022-08-26 14:09:07,232 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36687
-2022-08-26 14:09:07,237 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43371
-2022-08-26 14:09:07,237 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43371
-2022-08-26 14:09:07,237 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:07,237 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35519
-2022-08-26 14:09:07,237 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35669
-2022-08-26 14:09:07,237 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,237 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:07,237 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:07,237 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t6buabzp
-2022-08-26 14:09:07,237 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,238 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46809
-2022-08-26 14:09:07,238 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46809
-2022-08-26 14:09:07,238 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:07,238 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35605
-2022-08-26 14:09:07,238 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35669
-2022-08-26 14:09:07,238 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,238 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:07,238 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:07,238 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qzndrqko
-2022-08-26 14:09:07,238 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,241 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43371', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:07,241 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43371
-2022-08-26 14:09:07,241 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,241 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46809', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:07,242 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46809
-2022-08-26 14:09:07,242 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,242 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35669
-2022-08-26 14:09:07,242 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,242 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35669
-2022-08-26 14:09:07,242 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,243 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,243 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,256 - distributed.scheduler - INFO - Receive client connection: Client-52729aa2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,256 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,285 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:43371
-2022-08-26 14:09:07,286 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:43371; 1 keys are being moved away.
-2022-08-26 14:09:07,296 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43371', name: 0, status: closing_gracefully, memory: 1, processing: 0>
-2022-08-26 14:09:07,296 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43371
-2022-08-26 14:09:07,296 - distributed.scheduler - INFO - Retired worker tcp://127.0.0.1:43371
-2022-08-26 14:09:07,307 - distributed.scheduler - INFO - Remove client Client-52729aa2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,307 - distributed.scheduler - INFO - Remove client Client-52729aa2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,308 - distributed.scheduler - INFO - Close client connection: Client-52729aa2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,308 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43371
-2022-08-26 14:09:07,308 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46809
-2022-08-26 14:09:07,309 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46809', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:07,309 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46809
-2022-08-26 14:09:07,309 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:07,310 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b548b91b-05a8-446c-95e2-f3c2eb978bc2 Address tcp://127.0.0.1:43371 Status: Status.closing
-2022-08-26 14:09:07,310 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c0f4e0fa-54e5-4369-8b79-8f4bef0dc448 Address tcp://127.0.0.1:46809 Status: Status.closing
-2022-08-26 14:09:07,311 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:07,311 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:07,517 - distributed.utils_perf - WARNING - full garbage collections took 84% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_retire_workers_n 2022-08-26 14:09:07,523 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:07,525 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:07,525 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35337
-2022-08-26 14:09:07,525 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36549
-2022-08-26 14:09:07,530 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45019
-2022-08-26 14:09:07,530 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45019
-2022-08-26 14:09:07,530 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:07,530 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34971
-2022-08-26 14:09:07,530 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35337
-2022-08-26 14:09:07,530 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,530 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:07,530 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:07,530 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k6hrptzo
-2022-08-26 14:09:07,530 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,531 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35067
-2022-08-26 14:09:07,531 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35067
-2022-08-26 14:09:07,531 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:07,531 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37731
-2022-08-26 14:09:07,531 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35337
-2022-08-26 14:09:07,531 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,531 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:07,531 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:07,531 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-owg8opvu
-2022-08-26 14:09:07,531 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,534 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45019', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:07,534 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45019
-2022-08-26 14:09:07,534 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,535 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35067', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:07,535 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35067
-2022-08-26 14:09:07,535 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,535 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35337
-2022-08-26 14:09:07,535 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,536 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35337
-2022-08-26 14:09:07,536 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,536 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,536 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,550 - distributed.scheduler - INFO - Receive client connection: Client-529f62ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,550 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,551 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:45019
-2022-08-26 14:09:07,551 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:45019; no unique keys need to be moved away.
-2022-08-26 14:09:07,551 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45019', name: 0, status: closing_gracefully, memory: 0, processing: 0>
-2022-08-26 14:09:07,551 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45019
-2022-08-26 14:09:07,551 - distributed.scheduler - INFO - Retired worker tcp://127.0.0.1:45019
-2022-08-26 14:09:07,551 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:35067
-2022-08-26 14:09:07,552 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:35067; no unique keys need to be moved away.
-2022-08-26 14:09:07,552 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35067', name: 1, status: closing_gracefully, memory: 0, processing: 0>
-2022-08-26 14:09:07,552 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35067
-2022-08-26 14:09:07,552 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:07,552 - distributed.scheduler - INFO - Retired worker tcp://127.0.0.1:35067
-2022-08-26 14:09:07,557 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45019
-2022-08-26 14:09:07,557 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1c48d97c-ef01-4076-9316-a1a6e85731ae Address tcp://127.0.0.1:45019 Status: Status.closing
-2022-08-26 14:09:07,558 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35067
-2022-08-26 14:09:07,559 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cb850fc1-34b0-4892-81e7-abab40643f30 Address tcp://127.0.0.1:35067 Status: Status.closing
-2022-08-26 14:09:07,564 - distributed.scheduler - INFO - Remove client Client-529f62ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,564 - distributed.scheduler - INFO - Remove client Client-529f62ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,564 - distributed.scheduler - INFO - Close client connection: Client-529f62ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,565 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:07,565 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:07,772 - distributed.utils_perf - WARNING - full garbage collections took 84% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_workers_to_close 2022-08-26 14:09:07,778 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:07,780 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:07,780 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45983
-2022-08-26 14:09:07,780 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41255
-2022-08-26 14:09:07,788 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33313
-2022-08-26 14:09:07,788 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33313
-2022-08-26 14:09:07,788 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:07,788 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33783
-2022-08-26 14:09:07,788 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45983
-2022-08-26 14:09:07,788 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,788 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:07,788 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:07,788 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z6897dof
-2022-08-26 14:09:07,789 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,789 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46309
-2022-08-26 14:09:07,789 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46309
-2022-08-26 14:09:07,789 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:07,789 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35677
-2022-08-26 14:09:07,789 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45983
-2022-08-26 14:09:07,789 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,789 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:07,789 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:07,790 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-emuzvlb5
-2022-08-26 14:09:07,790 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,790 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38255
-2022-08-26 14:09:07,790 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38255
-2022-08-26 14:09:07,790 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:09:07,790 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45699
-2022-08-26 14:09:07,790 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45983
-2022-08-26 14:09:07,790 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,790 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:07,790 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:07,791 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pdau8kno
-2022-08-26 14:09:07,791 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,791 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36767
-2022-08-26 14:09:07,791 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36767
-2022-08-26 14:09:07,791 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:09:07,791 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34895
-2022-08-26 14:09:07,791 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45983
-2022-08-26 14:09:07,791 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,791 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:07,792 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:07,792 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gtjy5zbi
-2022-08-26 14:09:07,792 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,796 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33313', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:07,797 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33313
-2022-08-26 14:09:07,797 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,797 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46309', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:07,797 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46309
-2022-08-26 14:09:07,797 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,798 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38255', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:07,798 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38255
-2022-08-26 14:09:07,798 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,798 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36767', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:07,799 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36767
-2022-08-26 14:09:07,799 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,799 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45983
-2022-08-26 14:09:07,799 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,799 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45983
-2022-08-26 14:09:07,799 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,800 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45983
-2022-08-26 14:09:07,800 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,800 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45983
-2022-08-26 14:09:07,800 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:07,800 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,800 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,801 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,801 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,815 - distributed.scheduler - INFO - Receive client connection: Client-52c7d070-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,815 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:07,838 - distributed.scheduler - INFO - Remove client Client-52c7d070-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,838 - distributed.scheduler - INFO - Remove client Client-52c7d070-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,838 - distributed.scheduler - INFO - Close client connection: Client-52c7d070-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:07,839 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33313
-2022-08-26 14:09:07,839 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46309
-2022-08-26 14:09:07,840 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38255
-2022-08-26 14:09:07,840 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36767
-2022-08-26 14:09:07,841 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46309', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:07,842 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46309
-2022-08-26 14:09:07,842 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a1b074ec-bee1-4b87-9989-154253782ceb Address tcp://127.0.0.1:46309 Status: Status.closing
-2022-08-26 14:09:07,843 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1efda3c2-f576-47fc-9e99-5c1712bcc28e Address tcp://127.0.0.1:33313 Status: Status.closing
-2022-08-26 14:09:07,843 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b20478f3-1551-4a2a-b7c3-dbb3186a02f3 Address tcp://127.0.0.1:38255 Status: Status.closing
-2022-08-26 14:09:07,843 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1b76d177-4173-4505-9a1f-b5c5238acd25 Address tcp://127.0.0.1:36767 Status: Status.closing
-2022-08-26 14:09:07,844 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33313', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:07,844 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33313
-2022-08-26 14:09:07,844 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38255', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:07,844 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38255
-2022-08-26 14:09:07,844 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36767', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:07,844 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36767
-2022-08-26 14:09:07,845 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:07,849 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:07,849 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:08,058 - distributed.utils_perf - WARNING - full garbage collections took 83% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_workers_to_close_grouped 2022-08-26 14:09:08,063 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:08,065 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:08,065 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38841
-2022-08-26 14:09:08,065 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46155
-2022-08-26 14:09:08,073 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33647
-2022-08-26 14:09:08,073 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33647
-2022-08-26 14:09:08,073 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:08,073 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41743
-2022-08-26 14:09:08,074 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38841
-2022-08-26 14:09:08,074 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,074 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:08,074 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:08,074 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nyp5efft
-2022-08-26 14:09:08,074 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,074 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40083
-2022-08-26 14:09:08,074 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40083
-2022-08-26 14:09:08,074 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:08,075 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43587
-2022-08-26 14:09:08,075 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38841
-2022-08-26 14:09:08,075 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,075 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:08,075 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:08,075 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t0u2oe8s
-2022-08-26 14:09:08,075 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,075 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41145
-2022-08-26 14:09:08,075 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41145
-2022-08-26 14:09:08,076 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:09:08,076 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41725
-2022-08-26 14:09:08,076 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38841
-2022-08-26 14:09:08,076 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,076 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:08,076 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:08,076 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-klnv6s3j
-2022-08-26 14:09:08,076 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,076 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36481
-2022-08-26 14:09:08,077 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36481
-2022-08-26 14:09:08,077 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:09:08,077 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35689
-2022-08-26 14:09:08,077 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38841
-2022-08-26 14:09:08,077 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,077 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:08,077 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:08,077 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jtu8oqvk
-2022-08-26 14:09:08,077 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,082 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33647', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:08,082 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33647
-2022-08-26 14:09:08,082 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,083 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40083', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:08,083 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40083
-2022-08-26 14:09:08,083 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,083 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41145', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:08,084 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41145
-2022-08-26 14:09:08,084 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,084 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36481', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:08,084 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36481
-2022-08-26 14:09:08,084 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,085 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38841
-2022-08-26 14:09:08,085 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,085 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38841
-2022-08-26 14:09:08,085 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,085 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38841
-2022-08-26 14:09:08,085 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,086 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38841
-2022-08-26 14:09:08,086 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,086 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,086 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,086 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,086 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,100 - distributed.scheduler - INFO - Receive client connection: Client-52f36573-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:08,101 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,134 - distributed.scheduler - INFO - Remove client Client-52f36573-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:08,134 - distributed.scheduler - INFO - Remove client Client-52f36573-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:08,134 - distributed.scheduler - INFO - Close client connection: Client-52f36573-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:08,136 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33647
-2022-08-26 14:09:08,136 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40083
-2022-08-26 14:09:08,137 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41145
-2022-08-26 14:09:08,137 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36481
-2022-08-26 14:09:08,138 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40083', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:08,138 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40083
-2022-08-26 14:09:08,139 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4a01c59a-dfb7-45f5-a122-50ed8a4b5be4 Address tcp://127.0.0.1:40083 Status: Status.closing
-2022-08-26 14:09:08,139 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41145', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:08,139 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41145
-2022-08-26 14:09:08,139 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36481', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:08,139 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36481
-2022-08-26 14:09:08,140 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9b1ee6eb-1bf0-409e-8a5f-9369ab7a8355 Address tcp://127.0.0.1:41145 Status: Status.closing
-2022-08-26 14:09:08,140 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5439a018-516c-4a3c-87de-24a96bc03288 Address tcp://127.0.0.1:36481 Status: Status.closing
-2022-08-26 14:09:08,141 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3ff73644-0247-436e-84ad-a188c3d91fd5 Address tcp://127.0.0.1:33647 Status: Status.closing
-2022-08-26 14:09:08,141 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33647', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:08,141 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33647
-2022-08-26 14:09:08,141 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:08,313 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:08,313 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:08,522 - distributed.utils_perf - WARNING - full garbage collections took 83% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_retire_workers_no_suspicious_tasks 2022-08-26 14:09:08,528 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:08,530 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:08,530 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39475
-2022-08-26 14:09:08,530 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39477
-2022-08-26 14:09:08,534 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35057
-2022-08-26 14:09:08,534 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35057
-2022-08-26 14:09:08,534 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:08,534 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37481
-2022-08-26 14:09:08,535 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39475
-2022-08-26 14:09:08,535 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,535 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:08,535 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:08,535 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_u59t8do
-2022-08-26 14:09:08,535 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,535 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38531
-2022-08-26 14:09:08,535 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38531
-2022-08-26 14:09:08,535 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:08,536 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34333
-2022-08-26 14:09:08,536 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39475
-2022-08-26 14:09:08,536 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,536 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:08,536 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:08,536 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xg3f7gfo
-2022-08-26 14:09:08,536 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,539 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35057', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:08,539 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35057
-2022-08-26 14:09:08,539 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,540 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38531', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:08,540 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38531
-2022-08-26 14:09:08,540 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,540 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39475
-2022-08-26 14:09:08,540 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,540 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39475
-2022-08-26 14:09:08,540 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:08,541 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,541 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,554 - distributed.scheduler - INFO - Receive client connection: Client-5338b21e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:08,555 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:08,757 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:35057
-2022-08-26 14:09:08,757 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:35057; no unique keys need to be moved away.
-2022-08-26 14:09:08,757 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35057', name: 0, status: closing_gracefully, memory: 0, processing: 1>
-2022-08-26 14:09:08,757 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35057
-2022-08-26 14:09:08,758 - distributed.scheduler - INFO - Retired worker tcp://127.0.0.1:35057
-2022-08-26 14:09:08,769 - distributed.scheduler - INFO - Remove client Client-5338b21e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:08,769 - distributed.scheduler - INFO - Remove client Client-5338b21e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:08,770 - distributed.scheduler - INFO - Close client connection: Client-5338b21e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:08,770 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35057
-2022-08-26 14:09:08,770 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38531
-2022-08-26 14:09:08,772 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5ff9268b-0c5e-42c2-b8f7-c49b3852233a Address tcp://127.0.0.1:35057 Status: Status.closing
-2022-08-26 14:09:08,772 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-22545f8f-ad05-4799-9569-d0a7b0e92369 Address tcp://127.0.0.1:38531 Status: Status.closing
-2022-08-26 14:09:08,772 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38531', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:08,772 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38531
-2022-08-26 14:09:08,772 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:09,260 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:09,261 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:09,470 - distributed.utils_perf - WARNING - full garbage collections took 84% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_file_descriptors SKIPPED (...)
-distributed/tests/test_scheduler.py::test_learn_occupancy SKIPPED (n...)
-distributed/tests/test_scheduler.py::test_learn_occupancy_2 SKIPPED
-distributed/tests/test_scheduler.py::test_occupancy_cleardown 2022-08-26 14:09:09,478 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:09,480 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:09,480 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37369
-2022-08-26 14:09:09,480 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42455
-2022-08-26 14:09:09,485 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35991
-2022-08-26 14:09:09,485 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35991
-2022-08-26 14:09:09,485 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:09,485 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40167
-2022-08-26 14:09:09,485 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37369
-2022-08-26 14:09:09,485 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:09,485 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:09,485 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:09,485 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lr84zedw
-2022-08-26 14:09:09,485 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:09,486 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45589
-2022-08-26 14:09:09,486 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45589
-2022-08-26 14:09:09,486 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:09,486 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45573
-2022-08-26 14:09:09,486 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37369
-2022-08-26 14:09:09,486 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:09,486 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:09,486 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:09,486 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yqz99g1l
-2022-08-26 14:09:09,486 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:09,489 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35991', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:09,490 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35991
-2022-08-26 14:09:09,490 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:09,490 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45589', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:09,490 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45589
-2022-08-26 14:09:09,490 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:09,491 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37369
-2022-08-26 14:09:09,491 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:09,491 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37369
-2022-08-26 14:09:09,491 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:09,491 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:09,491 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:09,505 - distributed.scheduler - INFO - Receive client connection: Client-53c9b86f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:09,505 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,083 - distributed.scheduler - INFO - Remove client Client-53c9b86f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:10,083 - distributed.scheduler - INFO - Remove client Client-53c9b86f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:10,083 - distributed.scheduler - INFO - Close client connection: Client-53c9b86f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:10,084 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35991
-2022-08-26 14:09:10,084 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45589
-2022-08-26 14:09:10,085 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35991', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,085 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35991
-2022-08-26 14:09:10,085 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45589', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,085 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45589
-2022-08-26 14:09:10,085 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:10,085 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c3f74b5d-de0d-48e6-abeb-b1e7ee841756 Address tcp://127.0.0.1:35991 Status: Status.closing
-2022-08-26 14:09:10,086 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f7385d8c-0a9d-416b-8695-10c9abd7b93c Address tcp://127.0.0.1:45589 Status: Status.closing
-2022-08-26 14:09:10,087 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:10,087 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:10,298 - distributed.utils_perf - WARNING - full garbage collections took 82% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_balance_many_workers 2022-08-26 14:09:10,304 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:10,306 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:10,306 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37665
-2022-08-26 14:09:10,306 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46339
-2022-08-26 14:09:10,363 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43179
-2022-08-26 14:09:10,363 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43179
-2022-08-26 14:09:10,363 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:10,363 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35551
-2022-08-26 14:09:10,363 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,363 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,363 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,363 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,364 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-asru8a1a
-2022-08-26 14:09:10,364 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,364 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36075
-2022-08-26 14:09:10,364 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36075
-2022-08-26 14:09:10,364 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:10,364 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46847
-2022-08-26 14:09:10,365 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,365 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,365 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,365 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,365 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-aklu3kft
-2022-08-26 14:09:10,365 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,366 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42317
-2022-08-26 14:09:10,366 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42317
-2022-08-26 14:09:10,366 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:09:10,366 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40123
-2022-08-26 14:09:10,366 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,366 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,366 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,366 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,366 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c7wmekmn
-2022-08-26 14:09:10,366 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,367 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32959
-2022-08-26 14:09:10,367 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32959
-2022-08-26 14:09:10,367 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:09:10,367 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34061
-2022-08-26 14:09:10,367 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,367 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,367 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,367 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,368 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-91p7hnv5
-2022-08-26 14:09:10,368 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,368 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40107
-2022-08-26 14:09:10,368 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40107
-2022-08-26 14:09:10,368 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:09:10,368 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40757
-2022-08-26 14:09:10,369 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,369 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,369 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,369 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,369 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-goo_l_ie
-2022-08-26 14:09:10,369 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,369 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37967
-2022-08-26 14:09:10,370 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37967
-2022-08-26 14:09:10,370 - distributed.worker - INFO -           Worker name:                          5
-2022-08-26 14:09:10,370 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46369
-2022-08-26 14:09:10,370 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,370 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,370 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,370 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,370 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-130om1e0
-2022-08-26 14:09:10,370 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,371 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33877
-2022-08-26 14:09:10,371 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33877
-2022-08-26 14:09:10,371 - distributed.worker - INFO -           Worker name:                          6
-2022-08-26 14:09:10,371 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36653
-2022-08-26 14:09:10,371 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,371 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,371 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,371 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,371 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qfssmu_k
-2022-08-26 14:09:10,372 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,372 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37513
-2022-08-26 14:09:10,372 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37513
-2022-08-26 14:09:10,372 - distributed.worker - INFO -           Worker name:                          7
-2022-08-26 14:09:10,372 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34903
-2022-08-26 14:09:10,372 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,372 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,373 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,373 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,373 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3u3c9kmj
-2022-08-26 14:09:10,373 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,373 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40063
-2022-08-26 14:09:10,373 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40063
-2022-08-26 14:09:10,374 - distributed.worker - INFO -           Worker name:                          8
-2022-08-26 14:09:10,374 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46583
-2022-08-26 14:09:10,374 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,374 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,374 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,374 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,374 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-06xn17uz
-2022-08-26 14:09:10,374 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,375 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37349
-2022-08-26 14:09:10,375 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37349
-2022-08-26 14:09:10,375 - distributed.worker - INFO -           Worker name:                          9
-2022-08-26 14:09:10,375 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40417
-2022-08-26 14:09:10,375 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,375 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,375 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,375 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,375 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zalz2l6j
-2022-08-26 14:09:10,375 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,376 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37965
-2022-08-26 14:09:10,376 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37965
-2022-08-26 14:09:10,376 - distributed.worker - INFO -           Worker name:                         10
-2022-08-26 14:09:10,376 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34953
-2022-08-26 14:09:10,376 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,376 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,376 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,377 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,377 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-96pyi162
-2022-08-26 14:09:10,377 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,377 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45563
-2022-08-26 14:09:10,377 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45563
-2022-08-26 14:09:10,377 - distributed.worker - INFO -           Worker name:                         11
-2022-08-26 14:09:10,378 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37355
-2022-08-26 14:09:10,378 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,378 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,378 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,378 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,378 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sjqdqw38
-2022-08-26 14:09:10,378 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,379 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44283
-2022-08-26 14:09:10,379 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44283
-2022-08-26 14:09:10,379 - distributed.worker - INFO -           Worker name:                         12
-2022-08-26 14:09:10,379 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43007
-2022-08-26 14:09:10,379 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,379 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,379 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,379 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,379 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z9a9y1qt
-2022-08-26 14:09:10,379 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,380 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46159
-2022-08-26 14:09:10,380 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46159
-2022-08-26 14:09:10,380 - distributed.worker - INFO -           Worker name:                         13
-2022-08-26 14:09:10,380 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34295
-2022-08-26 14:09:10,380 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,380 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,380 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,380 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,381 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5fbc396f
-2022-08-26 14:09:10,381 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,381 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43131
-2022-08-26 14:09:10,381 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43131
-2022-08-26 14:09:10,381 - distributed.worker - INFO -           Worker name:                         14
-2022-08-26 14:09:10,381 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38299
-2022-08-26 14:09:10,381 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,382 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,382 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,382 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,382 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k447yozd
-2022-08-26 14:09:10,382 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,382 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33591
-2022-08-26 14:09:10,382 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33591
-2022-08-26 14:09:10,383 - distributed.worker - INFO -           Worker name:                         15
-2022-08-26 14:09:10,383 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36097
-2022-08-26 14:09:10,383 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,383 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,383 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,383 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,383 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gm7bm9zm
-2022-08-26 14:09:10,383 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,384 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43461
-2022-08-26 14:09:10,384 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43461
-2022-08-26 14:09:10,384 - distributed.worker - INFO -           Worker name:                         16
-2022-08-26 14:09:10,384 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46213
-2022-08-26 14:09:10,384 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,384 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,384 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,384 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,384 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-we7iuna9
-2022-08-26 14:09:10,384 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,385 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39927
-2022-08-26 14:09:10,385 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39927
-2022-08-26 14:09:10,385 - distributed.worker - INFO -           Worker name:                         17
-2022-08-26 14:09:10,385 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33757
-2022-08-26 14:09:10,385 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,385 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,385 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,386 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,386 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qtwz1ktz
-2022-08-26 14:09:10,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,386 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45767
-2022-08-26 14:09:10,386 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45767
-2022-08-26 14:09:10,386 - distributed.worker - INFO -           Worker name:                         18
-2022-08-26 14:09:10,387 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41493
-2022-08-26 14:09:10,387 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,387 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,387 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,387 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d8p7f3wa
-2022-08-26 14:09:10,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,388 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45589
-2022-08-26 14:09:10,388 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45589
-2022-08-26 14:09:10,388 - distributed.worker - INFO -           Worker name:                         19
-2022-08-26 14:09:10,388 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39277
-2022-08-26 14:09:10,388 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,388 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,388 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,388 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,388 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bj6fcju9
-2022-08-26 14:09:10,388 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,389 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37661
-2022-08-26 14:09:10,389 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37661
-2022-08-26 14:09:10,389 - distributed.worker - INFO -           Worker name:                         20
-2022-08-26 14:09:10,389 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35121
-2022-08-26 14:09:10,389 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,389 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,389 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,389 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,390 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yis1yns6
-2022-08-26 14:09:10,390 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,390 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39557
-2022-08-26 14:09:10,390 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39557
-2022-08-26 14:09:10,390 - distributed.worker - INFO -           Worker name:                         21
-2022-08-26 14:09:10,390 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38835
-2022-08-26 14:09:10,390 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,391 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,391 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,391 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,391 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ymfdyhfs
-2022-08-26 14:09:10,391 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,391 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35787
-2022-08-26 14:09:10,391 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35787
-2022-08-26 14:09:10,392 - distributed.worker - INFO -           Worker name:                         22
-2022-08-26 14:09:10,392 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42605
-2022-08-26 14:09:10,392 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,392 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,392 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,392 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,392 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ebjddzd8
-2022-08-26 14:09:10,392 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,393 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46017
-2022-08-26 14:09:10,393 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46017
-2022-08-26 14:09:10,393 - distributed.worker - INFO -           Worker name:                         23
-2022-08-26 14:09:10,393 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39983
-2022-08-26 14:09:10,393 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,393 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,393 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,393 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,393 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nqw9hmkh
-2022-08-26 14:09:10,393 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,394 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41561
-2022-08-26 14:09:10,394 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41561
-2022-08-26 14:09:10,394 - distributed.worker - INFO -           Worker name:                         24
-2022-08-26 14:09:10,394 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38837
-2022-08-26 14:09:10,394 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,394 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,394 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,395 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,395 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pxu7f7gj
-2022-08-26 14:09:10,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,395 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44041
-2022-08-26 14:09:10,395 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44041
-2022-08-26 14:09:10,395 - distributed.worker - INFO -           Worker name:                         25
-2022-08-26 14:09:10,396 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43989
-2022-08-26 14:09:10,396 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,396 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,396 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,396 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,396 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-btxsm257
-2022-08-26 14:09:10,396 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,397 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45457
-2022-08-26 14:09:10,397 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45457
-2022-08-26 14:09:10,397 - distributed.worker - INFO -           Worker name:                         26
-2022-08-26 14:09:10,397 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33357
-2022-08-26 14:09:10,397 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,397 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,397 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,397 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,397 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4_uqbryy
-2022-08-26 14:09:10,397 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,398 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35667
-2022-08-26 14:09:10,398 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35667
-2022-08-26 14:09:10,398 - distributed.worker - INFO -           Worker name:                         27
-2022-08-26 14:09:10,398 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46537
-2022-08-26 14:09:10,398 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,398 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,398 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,398 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,399 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zw_bsmd8
-2022-08-26 14:09:10,399 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,399 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42037
-2022-08-26 14:09:10,399 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42037
-2022-08-26 14:09:10,399 - distributed.worker - INFO -           Worker name:                         28
-2022-08-26 14:09:10,399 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39413
-2022-08-26 14:09:10,399 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,400 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,400 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,400 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,400 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dn8on4y2
-2022-08-26 14:09:10,400 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,400 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37991
-2022-08-26 14:09:10,400 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37991
-2022-08-26 14:09:10,401 - distributed.worker - INFO -           Worker name:                         29
-2022-08-26 14:09:10,401 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44429
-2022-08-26 14:09:10,401 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,401 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,401 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:10,401 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:10,401 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z0m_9qv1
-2022-08-26 14:09:10,401 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,430 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43179', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,431 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43179
-2022-08-26 14:09:10,431 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,431 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36075', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,431 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36075
-2022-08-26 14:09:10,431 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,432 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42317', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,432 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42317
-2022-08-26 14:09:10,432 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,432 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32959', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,433 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32959
-2022-08-26 14:09:10,433 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,433 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40107', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,433 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40107
-2022-08-26 14:09:10,433 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,434 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37967', name: 5, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,434 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37967
-2022-08-26 14:09:10,434 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,434 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33877', name: 6, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,435 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33877
-2022-08-26 14:09:10,435 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,435 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37513', name: 7, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,435 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37513
-2022-08-26 14:09:10,435 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,436 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40063', name: 8, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,436 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40063
-2022-08-26 14:09:10,436 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,436 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37349', name: 9, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,437 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37349
-2022-08-26 14:09:10,437 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,437 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37965', name: 10, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,437 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37965
-2022-08-26 14:09:10,437 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,438 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45563', name: 11, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,438 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45563
-2022-08-26 14:09:10,438 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,439 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44283', name: 12, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,439 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44283
-2022-08-26 14:09:10,439 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,439 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46159', name: 13, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,440 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46159
-2022-08-26 14:09:10,440 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,440 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43131', name: 14, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,440 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43131
-2022-08-26 14:09:10,440 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,441 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33591', name: 15, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,441 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33591
-2022-08-26 14:09:10,441 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,441 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43461', name: 16, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,442 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43461
-2022-08-26 14:09:10,442 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,442 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39927', name: 17, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,442 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39927
-2022-08-26 14:09:10,442 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,443 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45767', name: 18, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,443 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45767
-2022-08-26 14:09:10,443 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,443 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45589', name: 19, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,444 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45589
-2022-08-26 14:09:10,444 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,444 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37661', name: 20, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,444 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37661
-2022-08-26 14:09:10,445 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,445 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39557', name: 21, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,445 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39557
-2022-08-26 14:09:10,445 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,446 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35787', name: 22, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,446 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35787
-2022-08-26 14:09:10,446 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,446 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46017', name: 23, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,447 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46017
-2022-08-26 14:09:10,447 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,447 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41561', name: 24, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,447 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41561
-2022-08-26 14:09:10,447 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,448 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44041', name: 25, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,448 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44041
-2022-08-26 14:09:10,448 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,448 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45457', name: 26, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,449 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45457
-2022-08-26 14:09:10,449 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,449 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35667', name: 27, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,449 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35667
-2022-08-26 14:09:10,450 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,450 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42037', name: 28, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,450 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42037
-2022-08-26 14:09:10,450 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,451 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37991', name: 29, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:10,451 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37991
-2022-08-26 14:09:10,451 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,452 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,452 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,453 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,453 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,453 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,453 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,453 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,453 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,454 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,454 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,454 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,454 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,454 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,454 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,455 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,455 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,455 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,455 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,455 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,455 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,456 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,456 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,456 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,456 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,456 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,456 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,457 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,457 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,457 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,457 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,457 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,457 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,458 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,458 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,458 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,458 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,458 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,458 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,459 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,459 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,459 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,460 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,460 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,460 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,460 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,460 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,460 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,461 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,461 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,461 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,461 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,461 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,461 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,462 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,462 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,462 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,462 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,462 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,463 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37665
-2022-08-26 14:09:10,463 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,464 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,465 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,466 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,466 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,466 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,466 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,466 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,482 - distributed.scheduler - INFO - Receive client connection: Client-545eb2ce-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:10,482 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:10,730 - distributed.scheduler - INFO - Remove client Client-545eb2ce-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:10,730 - distributed.scheduler - INFO - Remove client Client-545eb2ce-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:10,730 - distributed.scheduler - INFO - Close client connection: Client-545eb2ce-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:10,730 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43179
-2022-08-26 14:09:10,731 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36075
-2022-08-26 14:09:10,731 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42317
-2022-08-26 14:09:10,732 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32959
-2022-08-26 14:09:10,732 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40107
-2022-08-26 14:09:10,732 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37967
-2022-08-26 14:09:10,732 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33877
-2022-08-26 14:09:10,733 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37513
-2022-08-26 14:09:10,733 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40063
-2022-08-26 14:09:10,733 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37349
-2022-08-26 14:09:10,734 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37965
-2022-08-26 14:09:10,734 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45563
-2022-08-26 14:09:10,734 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44283
-2022-08-26 14:09:10,735 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46159
-2022-08-26 14:09:10,735 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43131
-2022-08-26 14:09:10,735 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33591
-2022-08-26 14:09:10,736 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43461
-2022-08-26 14:09:10,736 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39927
-2022-08-26 14:09:10,736 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45767
-2022-08-26 14:09:10,737 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45589
-2022-08-26 14:09:10,737 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37661
-2022-08-26 14:09:10,737 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39557
-2022-08-26 14:09:10,737 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35787
-2022-08-26 14:09:10,738 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46017
-2022-08-26 14:09:10,738 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41561
-2022-08-26 14:09:10,738 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44041
-2022-08-26 14:09:10,739 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45457
-2022-08-26 14:09:10,739 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35667
-2022-08-26 14:09:10,739 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42037
-2022-08-26 14:09:10,740 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37991
-2022-08-26 14:09:10,747 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43179', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,747 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43179
-2022-08-26 14:09:10,747 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36075', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,747 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36075
-2022-08-26 14:09:10,748 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42317', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,748 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42317
-2022-08-26 14:09:10,748 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32959', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,748 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32959
-2022-08-26 14:09:10,748 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40107', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,748 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40107
-2022-08-26 14:09:10,748 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37967', name: 5, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,748 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37967
-2022-08-26 14:09:10,749 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33877', name: 6, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,749 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33877
-2022-08-26 14:09:10,749 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37513', name: 7, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,749 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37513
-2022-08-26 14:09:10,749 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40063', name: 8, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,749 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40063
-2022-08-26 14:09:10,749 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37349', name: 9, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,749 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37349
-2022-08-26 14:09:10,750 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37965', name: 10, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,750 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37965
-2022-08-26 14:09:10,750 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45563', name: 11, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,750 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45563
-2022-08-26 14:09:10,750 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44283', name: 12, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,750 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44283
-2022-08-26 14:09:10,750 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46159', name: 13, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,750 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46159
-2022-08-26 14:09:10,750 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43131', name: 14, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,751 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43131
-2022-08-26 14:09:10,751 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33591', name: 15, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,751 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33591
-2022-08-26 14:09:10,751 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43461', name: 16, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,751 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43461
-2022-08-26 14:09:10,751 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39927', name: 17, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,751 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39927
-2022-08-26 14:09:10,751 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45767', name: 18, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,751 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45767
-2022-08-26 14:09:10,752 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45589', name: 19, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,752 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45589
-2022-08-26 14:09:10,752 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37661', name: 20, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,752 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37661
-2022-08-26 14:09:10,752 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39557', name: 21, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,752 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39557
-2022-08-26 14:09:10,752 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35787', name: 22, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,752 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35787
-2022-08-26 14:09:10,753 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46017', name: 23, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,753 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46017
-2022-08-26 14:09:10,753 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41561', name: 24, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,753 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41561
-2022-08-26 14:09:10,753 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44041', name: 25, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,753 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44041
-2022-08-26 14:09:10,753 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45457', name: 26, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,753 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45457
-2022-08-26 14:09:10,753 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35667', name: 27, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,754 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35667
-2022-08-26 14:09:10,754 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42037', name: 28, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,754 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42037
-2022-08-26 14:09:10,754 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37991', name: 29, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:10,754 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37991
-2022-08-26 14:09:10,754 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:10,754 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-163d21a7-84ec-4270-9732-cd1226aa3106 Address tcp://127.0.0.1:43179 Status: Status.closing
-2022-08-26 14:09:10,755 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-17d4b8f9-bce7-42d3-a264-0d83f88a0fda Address tcp://127.0.0.1:36075 Status: Status.closing
-2022-08-26 14:09:10,755 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5db45b6f-3729-4cf0-be42-0e8bc2904b71 Address tcp://127.0.0.1:42317 Status: Status.closing
-2022-08-26 14:09:10,755 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cb3b35b3-09e3-4b6d-a016-4aa08f6fd1f9 Address tcp://127.0.0.1:32959 Status: Status.closing
-2022-08-26 14:09:10,755 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a2d9bd39-b353-48d9-a68c-7ee0798b5155 Address tcp://127.0.0.1:40107 Status: Status.closing
-2022-08-26 14:09:10,755 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cb98eaf4-48b7-4cb2-b091-5b1f50a92f0e Address tcp://127.0.0.1:37967 Status: Status.closing
-2022-08-26 14:09:10,756 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d2cdfd28-e442-4c03-a7ef-a1380961772a Address tcp://127.0.0.1:33877 Status: Status.closing
-2022-08-26 14:09:10,756 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e80fabde-ad12-4f88-9535-35b49df026ff Address tcp://127.0.0.1:37513 Status: Status.closing
-2022-08-26 14:09:10,756 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9b4972c8-898d-4f7a-a2d1-3776ae61b7c1 Address tcp://127.0.0.1:40063 Status: Status.closing
-2022-08-26 14:09:10,756 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-744d898f-a9c0-4922-8ae7-17dbdc8abdfe Address tcp://127.0.0.1:37349 Status: Status.closing
-2022-08-26 14:09:10,756 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f9995cbb-7371-4b67-9d7f-d8e28d2c020c Address tcp://127.0.0.1:37965 Status: Status.closing
-2022-08-26 14:09:10,757 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5088c7fa-de1f-4250-ae1e-c1dfc8acab09 Address tcp://127.0.0.1:45563 Status: Status.closing
-2022-08-26 14:09:10,757 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dc24755c-4173-4b2d-a6f6-fb7165f18236 Address tcp://127.0.0.1:44283 Status: Status.closing
-2022-08-26 14:09:10,757 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-156b18fa-f04a-494b-a42a-b79aab9d1643 Address tcp://127.0.0.1:46159 Status: Status.closing
-2022-08-26 14:09:10,757 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a12136fe-59bb-40a4-bbbc-6b5a2c5c1ae5 Address tcp://127.0.0.1:43131 Status: Status.closing
-2022-08-26 14:09:10,757 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5b6af430-3040-499c-9829-b1d675f51759 Address tcp://127.0.0.1:33591 Status: Status.closing
-2022-08-26 14:09:10,758 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-13b9a32d-c193-4cfb-9363-00b493beaffc Address tcp://127.0.0.1:43461 Status: Status.closing
-2022-08-26 14:09:10,758 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5bd7f581-28ea-48f9-b834-3044522d4740 Address tcp://127.0.0.1:39927 Status: Status.closing
-2022-08-26 14:09:10,758 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6b1c05c2-8ff1-495d-be91-0466be38b6da Address tcp://127.0.0.1:45767 Status: Status.closing
-2022-08-26 14:09:10,758 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-01b94af9-4858-4d9b-9b66-6241eed29bbb Address tcp://127.0.0.1:45589 Status: Status.closing
-2022-08-26 14:09:10,758 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-19942054-105e-483e-8bd8-5ffb83b7d45d Address tcp://127.0.0.1:37661 Status: Status.closing
-2022-08-26 14:09:10,758 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-210fae9d-7a8f-4286-9982-2c84cb41ad51 Address tcp://127.0.0.1:39557 Status: Status.closing
-2022-08-26 14:09:10,759 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e8216fb4-a765-4a9f-b45d-645269da214c Address tcp://127.0.0.1:35787 Status: Status.closing
-2022-08-26 14:09:10,759 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0ef90309-3c78-4eb8-a9a2-2e66f294e47e Address tcp://127.0.0.1:46017 Status: Status.closing
-2022-08-26 14:09:10,759 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e88351e5-8b29-4be4-8b6e-5a5b8261d965 Address tcp://127.0.0.1:41561 Status: Status.closing
-2022-08-26 14:09:10,759 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8156f30d-d6a0-4624-b734-c2f7af1e8c27 Address tcp://127.0.0.1:44041 Status: Status.closing
-2022-08-26 14:09:10,759 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-73d2067b-1991-4191-9cfa-ccbbc5e40fcc Address tcp://127.0.0.1:45457 Status: Status.closing
-2022-08-26 14:09:10,760 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e0eefdd9-bfa3-497d-b6c8-43a5ccbbdea5 Address tcp://127.0.0.1:35667 Status: Status.closing
-2022-08-26 14:09:10,760 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cb0439b3-407a-41e4-b564-a444ff855f1d Address tcp://127.0.0.1:42037 Status: Status.closing
-2022-08-26 14:09:10,760 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-be416b78-6aef-4aaf-8aa6-9b1b8511dc48 Address tcp://127.0.0.1:37991 Status: Status.closing
-2022-08-26 14:09:10,770 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:10,770 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:10,986 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
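[Editor's note, not part of the diff above] The deleted log up to this point records one full dask.distributed test lifecycle: the scheduler starts, thirty single-threaded workers start and register, a client connects, the test passes, and then the client, workers, and scheduler shut down in order. A minimal sketch of that same lifecycle, assuming only that dask.distributed is installed (worker counts and the submitted task below are illustrative, not taken from the test suite), looks like this:

    # Sketch of the startup/shutdown sequence seen in the log above.
    from dask.distributed import Client, LocalCluster

    if __name__ == "__main__":
        # One thread per worker, mirroring the "Threads: 1" worker banners.
        cluster = LocalCluster(n_workers=4, threads_per_worker=1)
        client = Client(cluster)   # scheduler logs "Receive client connection"

        # Run a trivial task so the workers do some work.
        print(client.submit(sum, range(10)).result())

        client.close()             # "Remove client" / "Close client connection"
        cluster.close()            # "Stopping worker ..." / "Scheduler closing..."

The log lines that follow belong to the next test case and continue the same pattern.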
-distributed/tests/test_scheduler.py::test_balance_many_workers_2 2022-08-26 14:09:10,993 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:10,995 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:10,995 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33179
-2022-08-26 14:09:10,995 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33033
-2022-08-26 14:09:11,052 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37165
-2022-08-26 14:09:11,052 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37165
-2022-08-26 14:09:11,052 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:11,052 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39663
-2022-08-26 14:09:11,052 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,052 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,053 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,053 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,053 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jxa1itkf
-2022-08-26 14:09:11,053 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,053 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33463
-2022-08-26 14:09:11,054 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33463
-2022-08-26 14:09:11,054 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:11,054 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38295
-2022-08-26 14:09:11,054 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,054 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,054 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,054 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,054 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ralz98sr
-2022-08-26 14:09:11,054 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,055 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40621
-2022-08-26 14:09:11,055 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40621
-2022-08-26 14:09:11,055 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:09:11,055 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44261
-2022-08-26 14:09:11,055 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,056 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,056 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,056 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,056 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8e83_n6n
-2022-08-26 14:09:11,056 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,056 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43019
-2022-08-26 14:09:11,057 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43019
-2022-08-26 14:09:11,057 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:09:11,057 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45567
-2022-08-26 14:09:11,057 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,057 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,057 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,057 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,057 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6y0tk706
-2022-08-26 14:09:11,057 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,058 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35641
-2022-08-26 14:09:11,058 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35641
-2022-08-26 14:09:11,058 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:09:11,058 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36693
-2022-08-26 14:09:11,058 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,059 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,059 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,059 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,059 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1w1a3p57
-2022-08-26 14:09:11,059 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,059 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43889
-2022-08-26 14:09:11,060 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43889
-2022-08-26 14:09:11,060 - distributed.worker - INFO -           Worker name:                          5
-2022-08-26 14:09:11,060 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42193
-2022-08-26 14:09:11,060 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,060 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,060 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,060 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,060 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s337dgce
-2022-08-26 14:09:11,060 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,061 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34483
-2022-08-26 14:09:11,061 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34483
-2022-08-26 14:09:11,061 - distributed.worker - INFO -           Worker name:                          6
-2022-08-26 14:09:11,061 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46139
-2022-08-26 14:09:11,061 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,062 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,062 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,062 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,062 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-iizl2jg6
-2022-08-26 14:09:11,062 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,062 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36379
-2022-08-26 14:09:11,063 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36379
-2022-08-26 14:09:11,063 - distributed.worker - INFO -           Worker name:                          7
-2022-08-26 14:09:11,063 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46297
-2022-08-26 14:09:11,063 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,063 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,063 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,063 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,063 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q0gybn1e
-2022-08-26 14:09:11,063 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,064 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36539
-2022-08-26 14:09:11,064 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36539
-2022-08-26 14:09:11,064 - distributed.worker - INFO -           Worker name:                          8
-2022-08-26 14:09:11,064 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33883
-2022-08-26 14:09:11,064 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,065 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,065 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,065 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,065 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bhtcfd0q
-2022-08-26 14:09:11,065 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,065 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33161
-2022-08-26 14:09:11,066 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33161
-2022-08-26 14:09:11,066 - distributed.worker - INFO -           Worker name:                          9
-2022-08-26 14:09:11,066 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43565
-2022-08-26 14:09:11,066 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,066 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,066 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,066 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,066 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-eswz9ibi
-2022-08-26 14:09:11,067 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,067 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43219
-2022-08-26 14:09:11,067 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43219
-2022-08-26 14:09:11,067 - distributed.worker - INFO -           Worker name:                         10
-2022-08-26 14:09:11,067 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44269
-2022-08-26 14:09:11,067 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,068 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,068 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,068 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,068 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wdhoi0uf
-2022-08-26 14:09:11,068 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,068 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38491
-2022-08-26 14:09:11,069 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38491
-2022-08-26 14:09:11,069 - distributed.worker - INFO -           Worker name:                         11
-2022-08-26 14:09:11,069 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39939
-2022-08-26 14:09:11,069 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,069 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,069 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,069 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,069 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ntycz7hx
-2022-08-26 14:09:11,070 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,070 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33131
-2022-08-26 14:09:11,070 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33131
-2022-08-26 14:09:11,070 - distributed.worker - INFO -           Worker name:                         12
-2022-08-26 14:09:11,070 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38105
-2022-08-26 14:09:11,070 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,071 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,071 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,071 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,071 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y70gw8ee
-2022-08-26 14:09:11,071 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,072 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34997
-2022-08-26 14:09:11,072 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34997
-2022-08-26 14:09:11,072 - distributed.worker - INFO -           Worker name:                         13
-2022-08-26 14:09:11,072 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39779
-2022-08-26 14:09:11,072 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,072 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,072 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,072 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,072 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3k94vhvh
-2022-08-26 14:09:11,073 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,073 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36399
-2022-08-26 14:09:11,073 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36399
-2022-08-26 14:09:11,073 - distributed.worker - INFO -           Worker name:                         14
-2022-08-26 14:09:11,073 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38301
-2022-08-26 14:09:11,073 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,074 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,074 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,074 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,074 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m11krqpm
-2022-08-26 14:09:11,074 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,074 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44313
-2022-08-26 14:09:11,075 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44313
-2022-08-26 14:09:11,075 - distributed.worker - INFO -           Worker name:                         15
-2022-08-26 14:09:11,075 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45917
-2022-08-26 14:09:11,075 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,075 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,075 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,075 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,075 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mn98lkzk
-2022-08-26 14:09:11,075 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,076 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45575
-2022-08-26 14:09:11,076 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45575
-2022-08-26 14:09:11,076 - distributed.worker - INFO -           Worker name:                         16
-2022-08-26 14:09:11,076 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36465
-2022-08-26 14:09:11,076 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,077 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,077 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,077 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,077 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-llv_8y8c
-2022-08-26 14:09:11,077 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,077 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46387
-2022-08-26 14:09:11,078 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46387
-2022-08-26 14:09:11,078 - distributed.worker - INFO -           Worker name:                         17
-2022-08-26 14:09:11,078 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46065
-2022-08-26 14:09:11,078 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,078 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,078 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,078 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,078 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fuueld1i
-2022-08-26 14:09:11,078 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,079 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39051
-2022-08-26 14:09:11,079 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39051
-2022-08-26 14:09:11,079 - distributed.worker - INFO -           Worker name:                         18
-2022-08-26 14:09:11,079 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43497
-2022-08-26 14:09:11,079 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,080 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,080 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,080 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,080 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i6fb4wbn
-2022-08-26 14:09:11,080 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,080 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39571
-2022-08-26 14:09:11,081 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39571
-2022-08-26 14:09:11,081 - distributed.worker - INFO -           Worker name:                         19
-2022-08-26 14:09:11,081 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38071
-2022-08-26 14:09:11,081 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,081 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,081 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,081 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,081 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-v5o0tam1
-2022-08-26 14:09:11,081 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,082 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40945
-2022-08-26 14:09:11,082 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40945
-2022-08-26 14:09:11,082 - distributed.worker - INFO -           Worker name:                         20
-2022-08-26 14:09:11,082 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44207
-2022-08-26 14:09:11,082 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,083 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,083 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,083 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,083 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-19bn81nb
-2022-08-26 14:09:11,083 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,083 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33407
-2022-08-26 14:09:11,084 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33407
-2022-08-26 14:09:11,084 - distributed.worker - INFO -           Worker name:                         21
-2022-08-26 14:09:11,084 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41641
-2022-08-26 14:09:11,084 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,084 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,084 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,084 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,084 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p65ecgh4
-2022-08-26 14:09:11,084 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,085 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33355
-2022-08-26 14:09:11,085 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33355
-2022-08-26 14:09:11,085 - distributed.worker - INFO -           Worker name:                         22
-2022-08-26 14:09:11,085 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38835
-2022-08-26 14:09:11,085 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,086 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,086 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,086 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,086 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-n4cbyvoh
-2022-08-26 14:09:11,086 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,086 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40097
-2022-08-26 14:09:11,087 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40097
-2022-08-26 14:09:11,087 - distributed.worker - INFO -           Worker name:                         23
-2022-08-26 14:09:11,087 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44193
-2022-08-26 14:09:11,087 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,087 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,087 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,087 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,087 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f5i9f4bq
-2022-08-26 14:09:11,087 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,088 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34909
-2022-08-26 14:09:11,088 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34909
-2022-08-26 14:09:11,088 - distributed.worker - INFO -           Worker name:                         24
-2022-08-26 14:09:11,088 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40983
-2022-08-26 14:09:11,088 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,088 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,089 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,089 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,089 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s983ufkq
-2022-08-26 14:09:11,089 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,089 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40185
-2022-08-26 14:09:11,090 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40185
-2022-08-26 14:09:11,090 - distributed.worker - INFO -           Worker name:                         25
-2022-08-26 14:09:11,090 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44025
-2022-08-26 14:09:11,090 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,090 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,090 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,090 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,090 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dftxi9ao
-2022-08-26 14:09:11,090 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,091 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39929
-2022-08-26 14:09:11,091 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39929
-2022-08-26 14:09:11,091 - distributed.worker - INFO -           Worker name:                         26
-2022-08-26 14:09:11,091 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42495
-2022-08-26 14:09:11,091 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,091 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,092 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,092 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,092 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f53ieism
-2022-08-26 14:09:11,092 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,092 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41183
-2022-08-26 14:09:11,093 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41183
-2022-08-26 14:09:11,093 - distributed.worker - INFO -           Worker name:                         27
-2022-08-26 14:09:11,093 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44893
-2022-08-26 14:09:11,093 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,093 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,093 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,093 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,093 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oq8dwhjw
-2022-08-26 14:09:11,093 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,094 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40391
-2022-08-26 14:09:11,094 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40391
-2022-08-26 14:09:11,094 - distributed.worker - INFO -           Worker name:                         28
-2022-08-26 14:09:11,094 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46291
-2022-08-26 14:09:11,094 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,094 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,095 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,095 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,095 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bf7pchwa
-2022-08-26 14:09:11,095 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,095 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33983
-2022-08-26 14:09:11,096 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33983
-2022-08-26 14:09:11,096 - distributed.worker - INFO -           Worker name:                         29
-2022-08-26 14:09:11,096 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44653
-2022-08-26 14:09:11,096 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,096 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,096 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:11,096 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:11,096 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-elvn7aqi
-2022-08-26 14:09:11,096 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,125 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37165', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,125 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37165
-2022-08-26 14:09:11,125 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,125 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33463', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,126 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33463
-2022-08-26 14:09:11,126 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,126 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40621', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,126 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40621
-2022-08-26 14:09:11,126 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,127 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43019', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,127 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43019
-2022-08-26 14:09:11,127 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,127 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35641', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,128 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35641
-2022-08-26 14:09:11,128 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,128 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43889', name: 5, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,128 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43889
-2022-08-26 14:09:11,128 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,129 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34483', name: 6, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,129 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34483
-2022-08-26 14:09:11,129 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,129 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36379', name: 7, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,130 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36379
-2022-08-26 14:09:11,130 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,130 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36539', name: 8, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,130 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36539
-2022-08-26 14:09:11,130 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,131 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33161', name: 9, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,131 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33161
-2022-08-26 14:09:11,131 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,131 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43219', name: 10, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,132 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43219
-2022-08-26 14:09:11,132 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,132 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38491', name: 11, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,132 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38491
-2022-08-26 14:09:11,132 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,133 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33131', name: 12, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,133 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33131
-2022-08-26 14:09:11,133 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,133 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34997', name: 13, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,134 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34997
-2022-08-26 14:09:11,134 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,134 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36399', name: 14, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,134 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36399
-2022-08-26 14:09:11,134 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,135 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44313', name: 15, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,135 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44313
-2022-08-26 14:09:11,135 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,135 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45575', name: 16, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,136 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45575
-2022-08-26 14:09:11,136 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,136 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46387', name: 17, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,136 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46387
-2022-08-26 14:09:11,136 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,137 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39051', name: 18, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,137 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39051
-2022-08-26 14:09:11,137 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,137 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39571', name: 19, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,138 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39571
-2022-08-26 14:09:11,138 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,138 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40945', name: 20, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,138 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40945
-2022-08-26 14:09:11,138 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,139 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33407', name: 21, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,139 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33407
-2022-08-26 14:09:11,139 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,139 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33355', name: 22, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,140 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33355
-2022-08-26 14:09:11,140 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,140 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40097', name: 23, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,140 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40097
-2022-08-26 14:09:11,140 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,141 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34909', name: 24, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,141 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34909
-2022-08-26 14:09:11,141 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,141 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40185', name: 25, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,142 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40185
-2022-08-26 14:09:11,142 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,142 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39929', name: 26, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,142 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39929
-2022-08-26 14:09:11,142 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,143 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41183', name: 27, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,143 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41183
-2022-08-26 14:09:11,143 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,143 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40391', name: 28, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,144 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40391
-2022-08-26 14:09:11,144 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,144 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33983', name: 29, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:11,144 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33983
-2022-08-26 14:09:11,144 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,146 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,146 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,146 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,146 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,146 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,147 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,147 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,147 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,147 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,147 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,148 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,148 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,148 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,148 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,148 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,148 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,149 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,149 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,149 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,149 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,149 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,150 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,150 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,150 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,150 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,150 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,151 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,151 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,151 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,151 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,151 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,151 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,152 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,152 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,152 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,152 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,153 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,153 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,153 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,153 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,153 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,153 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,154 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,154 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,154 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,154 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,154 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,155 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,155 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,155 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,155 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,155 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,156 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,156 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,156 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,157 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33179
-2022-08-26 14:09:11,157 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:11,158 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,158 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,158 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,158 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,159 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,160 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,176 - distributed.scheduler - INFO - Receive client connection: Client-54c8a9d2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:11,177 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:11,874 - distributed.scheduler - INFO - Remove client Client-54c8a9d2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:11,875 - distributed.scheduler - INFO - Remove client Client-54c8a9d2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:11,875 - distributed.scheduler - INFO - Close client connection: Client-54c8a9d2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:11,876 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37165
-2022-08-26 14:09:11,876 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33463
-2022-08-26 14:09:11,876 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40621
-2022-08-26 14:09:11,877 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43019
-2022-08-26 14:09:11,877 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35641
-2022-08-26 14:09:11,877 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43889
-2022-08-26 14:09:11,878 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34483
-2022-08-26 14:09:11,878 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36379
-2022-08-26 14:09:11,878 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36539
-2022-08-26 14:09:11,879 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33161
-2022-08-26 14:09:11,879 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43219
-2022-08-26 14:09:11,879 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38491
-2022-08-26 14:09:11,880 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33131
-2022-08-26 14:09:11,880 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34997
-2022-08-26 14:09:11,880 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36399
-2022-08-26 14:09:11,881 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44313
-2022-08-26 14:09:11,881 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45575
-2022-08-26 14:09:11,881 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46387
-2022-08-26 14:09:11,882 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39051
-2022-08-26 14:09:11,882 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39571
-2022-08-26 14:09:11,882 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40945
-2022-08-26 14:09:11,883 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33407
-2022-08-26 14:09:11,883 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33355
-2022-08-26 14:09:11,883 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40097
-2022-08-26 14:09:11,884 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34909
-2022-08-26 14:09:11,884 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40185
-2022-08-26 14:09:11,884 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39929
-2022-08-26 14:09:11,885 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41183
-2022-08-26 14:09:11,885 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40391
-2022-08-26 14:09:11,885 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33983
-2022-08-26 14:09:11,893 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37165', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,893 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37165
-2022-08-26 14:09:11,893 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33463', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,893 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33463
-2022-08-26 14:09:11,894 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40621', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,894 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40621
-2022-08-26 14:09:11,894 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43019', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,894 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43019
-2022-08-26 14:09:11,894 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35641', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,894 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35641
-2022-08-26 14:09:11,894 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43889', name: 5, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,894 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43889
-2022-08-26 14:09:11,894 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34483', name: 6, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,894 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34483
-2022-08-26 14:09:11,895 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36379', name: 7, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,895 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36379
-2022-08-26 14:09:11,895 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36539', name: 8, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,895 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36539
-2022-08-26 14:09:11,895 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33161', name: 9, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,895 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33161
-2022-08-26 14:09:11,895 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43219', name: 10, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,895 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43219
-2022-08-26 14:09:11,895 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38491', name: 11, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,896 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38491
-2022-08-26 14:09:11,896 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33131', name: 12, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,896 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33131
-2022-08-26 14:09:11,896 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34997', name: 13, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,896 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34997
-2022-08-26 14:09:11,896 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36399', name: 14, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,896 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36399
-2022-08-26 14:09:11,896 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44313', name: 15, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,896 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44313
-2022-08-26 14:09:11,896 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45575', name: 16, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,897 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45575
-2022-08-26 14:09:11,897 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46387', name: 17, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,897 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46387
-2022-08-26 14:09:11,897 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39051', name: 18, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,897 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39051
-2022-08-26 14:09:11,897 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39571', name: 19, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,897 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39571
-2022-08-26 14:09:11,897 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40945', name: 20, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,897 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40945
-2022-08-26 14:09:11,898 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33407', name: 21, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,898 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33407
-2022-08-26 14:09:11,898 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33355', name: 22, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,898 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33355
-2022-08-26 14:09:11,898 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40097', name: 23, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,898 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40097
-2022-08-26 14:09:11,898 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34909', name: 24, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,898 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34909
-2022-08-26 14:09:11,898 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40185', name: 25, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,898 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40185
-2022-08-26 14:09:11,899 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39929', name: 26, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,899 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39929
-2022-08-26 14:09:11,899 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41183', name: 27, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,899 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41183
-2022-08-26 14:09:11,899 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40391', name: 28, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,899 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40391
-2022-08-26 14:09:11,899 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33983', name: 29, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:11,899 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33983
-2022-08-26 14:09:11,899 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:11,899 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ec9788e8-02b9-4d09-84be-0d2c401cf37a Address tcp://127.0.0.1:37165 Status: Status.closing
-2022-08-26 14:09:11,900 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-49b1ebc8-7e5e-4674-a8d5-b9a4249abc53 Address tcp://127.0.0.1:33463 Status: Status.closing
-2022-08-26 14:09:11,900 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ca463b1c-8f9d-41c2-8f00-341812f0f75f Address tcp://127.0.0.1:40621 Status: Status.closing
-2022-08-26 14:09:11,900 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6d28efd8-fd5f-498d-9425-350089eadc8c Address tcp://127.0.0.1:43019 Status: Status.closing
-2022-08-26 14:09:11,901 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1a8a5a27-35b6-45e8-80dd-223ea928453e Address tcp://127.0.0.1:35641 Status: Status.closing
-2022-08-26 14:09:11,901 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9cd2e662-7a66-4cd4-9bca-0b02aa985a6b Address tcp://127.0.0.1:43889 Status: Status.closing
-2022-08-26 14:09:11,901 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-41037dfd-2f1e-45ed-8ccf-ff5716010df7 Address tcp://127.0.0.1:34483 Status: Status.closing
-2022-08-26 14:09:11,901 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e9fa2c53-d347-4578-be1b-79e37db5bfe3 Address tcp://127.0.0.1:36379 Status: Status.closing
-2022-08-26 14:09:11,901 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b552f209-a839-45b9-9463-8bb52fa6b0ab Address tcp://127.0.0.1:36539 Status: Status.closing
-2022-08-26 14:09:11,902 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d5106327-f0d8-4dae-b138-b92e197ff320 Address tcp://127.0.0.1:33161 Status: Status.closing
-2022-08-26 14:09:11,902 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9f4980c3-039a-4453-9344-6fb6e388430c Address tcp://127.0.0.1:43219 Status: Status.closing
-2022-08-26 14:09:11,902 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2baf60af-eb88-486c-a5a2-5372c4fff4a5 Address tcp://127.0.0.1:38491 Status: Status.closing
-2022-08-26 14:09:11,902 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a6bfe0d7-b8c6-4a42-b552-9cbad3d146e6 Address tcp://127.0.0.1:33131 Status: Status.closing
-2022-08-26 14:09:11,903 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-02fbb685-b6b3-4693-ae3b-b23c62ed7e5a Address tcp://127.0.0.1:34997 Status: Status.closing
-2022-08-26 14:09:11,903 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-29e691b6-0628-419d-96a6-e928b8956fdc Address tcp://127.0.0.1:36399 Status: Status.closing
-2022-08-26 14:09:11,903 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f7495c9f-947c-46b8-b6a0-a490090893d6 Address tcp://127.0.0.1:44313 Status: Status.closing
-2022-08-26 14:09:11,903 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e878abc3-5926-4243-b894-c38c29843b28 Address tcp://127.0.0.1:45575 Status: Status.closing
-2022-08-26 14:09:11,904 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fc31f059-5543-41d1-a758-656daa8f69b0 Address tcp://127.0.0.1:46387 Status: Status.closing
-2022-08-26 14:09:11,904 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-581c64b8-76d4-455e-bd36-0da19072b8fa Address tcp://127.0.0.1:39051 Status: Status.closing
-2022-08-26 14:09:11,904 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d61347a3-64f5-46e0-b94a-56294551a8dc Address tcp://127.0.0.1:39571 Status: Status.closing
-2022-08-26 14:09:11,904 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-94311ee5-24fb-45b7-9a8c-69b4452a9eb1 Address tcp://127.0.0.1:40945 Status: Status.closing
-2022-08-26 14:09:11,904 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7324ae4a-4f53-4172-99be-962c1314c08b Address tcp://127.0.0.1:33407 Status: Status.closing
-2022-08-26 14:09:11,904 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a1c863ad-eda8-41f1-b84c-511396f553df Address tcp://127.0.0.1:33355 Status: Status.closing
-2022-08-26 14:09:11,905 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-57505eb1-9570-425e-ba0d-e481d36a06f8 Address tcp://127.0.0.1:40097 Status: Status.closing
-2022-08-26 14:09:11,905 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4ba5cad4-9ad2-416c-a04f-0e1f3893eac8 Address tcp://127.0.0.1:34909 Status: Status.closing
-2022-08-26 14:09:11,905 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e3fae9b7-fd4b-4105-9160-103fc72f540d Address tcp://127.0.0.1:40185 Status: Status.closing
-2022-08-26 14:09:11,905 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-179e4072-1baa-4e0f-acd8-9f83e89e71cc Address tcp://127.0.0.1:39929 Status: Status.closing
-2022-08-26 14:09:11,906 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-17dd43f1-f30c-4847-b6a0-4107faa84d1e Address tcp://127.0.0.1:41183 Status: Status.closing
-2022-08-26 14:09:11,906 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8ca090fb-5bec-4ec4-8704-2d41af67d0db Address tcp://127.0.0.1:40391 Status: Status.closing
-2022-08-26 14:09:11,906 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ce8f9b17-2293-433b-9ca8-cdbe14732bdd Address tcp://127.0.0.1:33983 Status: Status.closing
-2022-08-26 14:09:11,917 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:11,918 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:12,141 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_learn_occupancy_multiple_workers 2022-08-26 14:09:12,149 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:12,151 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:12,151 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45551
-2022-08-26 14:09:12,151 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41869
-2022-08-26 14:09:12,156 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46387
-2022-08-26 14:09:12,156 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46387
-2022-08-26 14:09:12,156 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:12,156 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42761
-2022-08-26 14:09:12,156 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45551
-2022-08-26 14:09:12,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,156 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:12,156 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:12,156 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-udccyiev
-2022-08-26 14:09:12,156 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,157 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40039
-2022-08-26 14:09:12,157 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40039
-2022-08-26 14:09:12,157 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:12,157 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45881
-2022-08-26 14:09:12,157 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45551
-2022-08-26 14:09:12,157 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,157 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:12,157 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:12,158 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-92oqkho8
-2022-08-26 14:09:12,158 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,160 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46387', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:12,161 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46387
-2022-08-26 14:09:12,161 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:12,161 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40039', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:12,161 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40039
-2022-08-26 14:09:12,162 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:12,162 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45551
-2022-08-26 14:09:12,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,162 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45551
-2022-08-26 14:09:12,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,162 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:12,163 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:12,176 - distributed.scheduler - INFO - Receive client connection: Client-55615443-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:12,176 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:12,402 - distributed.scheduler - INFO - Remove client Client-55615443-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:12,402 - distributed.scheduler - INFO - Remove client Client-55615443-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:12,403 - distributed.scheduler - INFO - Close client connection: Client-55615443-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:12,403 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46387
-2022-08-26 14:09:12,403 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40039
-2022-08-26 14:09:12,405 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46387', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:12,405 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46387
-2022-08-26 14:09:12,405 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0b05e4e7-bd5a-47f5-8ac7-f514a0c97cee Address tcp://127.0.0.1:46387 Status: Status.closing
-2022-08-26 14:09:12,405 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-57ff48e8-300f-479b-9b24-3c5fb3470481 Address tcp://127.0.0.1:40039 Status: Status.closing
-2022-08-26 14:09:12,406 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40039', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:12,406 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40039
-2022-08-26 14:09:12,406 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:12,604 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:12,604 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:12,819 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
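The occupancy tests above exercise how the scheduler tracks per-worker load. As a rough illustration only (the worker_occupancy helper, the LocalCluster sizes, and the use of the scheduler's WorkerState.occupancy attribute are my own assumptions, not part of the test suite), that quantity can be inspected from a client roughly like this:

from dask.distributed import Client, LocalCluster

def worker_occupancy(dask_scheduler):
    # Client.run_on_scheduler passes the live Scheduler in as `dask_scheduler`
    return {addr: ws.occupancy for addr, ws in dask_scheduler.workers.items()}

if __name__ == "__main__":
    with LocalCluster(n_workers=2, threads_per_worker=1) as cluster:
        with Client(cluster) as client:
            # run some work so the scheduler learns task durations
            client.gather(client.map(lambda x: x + 1, range(100)))
            print(client.run_on_scheduler(worker_occupancy))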
-distributed/tests/test_scheduler.py::test_include_communication_in_occupancy 2022-08-26 14:09:12,825 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:12,827 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:12,827 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44675
-2022-08-26 14:09:12,827 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40635
-2022-08-26 14:09:12,831 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38823
-2022-08-26 14:09:12,831 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38823
-2022-08-26 14:09:12,831 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:12,832 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32901
-2022-08-26 14:09:12,832 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44675
-2022-08-26 14:09:12,832 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,832 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:12,832 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:12,832 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3budxr5w
-2022-08-26 14:09:12,832 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,832 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41053
-2022-08-26 14:09:12,832 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41053
-2022-08-26 14:09:12,832 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:12,832 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41127
-2022-08-26 14:09:12,832 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44675
-2022-08-26 14:09:12,833 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,833 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:12,833 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:12,833 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wxqcof82
-2022-08-26 14:09:12,833 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,836 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38823', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:12,836 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38823
-2022-08-26 14:09:12,836 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:12,836 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41053', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:12,836 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41053
-2022-08-26 14:09:12,837 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:12,837 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44675
-2022-08-26 14:09:12,837 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,837 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44675
-2022-08-26 14:09:12,837 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:12,837 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:12,837 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:12,851 - distributed.scheduler - INFO - Receive client connection: Client-55c84dc6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:12,851 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:14,020 - distributed.scheduler - INFO - Remove client Client-55c84dc6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:14,021 - distributed.scheduler - INFO - Remove client Client-55c84dc6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:14,021 - distributed.scheduler - INFO - Close client connection: Client-55c84dc6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:14,021 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38823
-2022-08-26 14:09:14,022 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41053
-2022-08-26 14:09:14,023 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38823', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:14,023 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38823
-2022-08-26 14:09:14,023 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41053', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:14,023 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41053
-2022-08-26 14:09:14,023 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:14,023 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-48038981-ff19-4d21-a5dc-b63fa6a54985 Address tcp://127.0.0.1:38823 Status: Status.closing
-2022-08-26 14:09:14,024 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-55b735bd-40d1-4ee3-a52c-7133b872013e Address tcp://127.0.0.1:41053 Status: Status.closing
-2022-08-26 14:09:14,025 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:14,025 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:14,236 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_new_worker_with_data_rejected 2022-08-26 14:09:14,242 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:14,243 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:14,243 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44915
-2022-08-26 14:09:14,243 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42377
-2022-08-26 14:09:14,248 - distributed.scheduler - ERROR - Worker 'tcp://127.0.0.1:34685' connected with 1 key(s) in memory! Worker reconnection is not supported. Keys: ['x']
-2022-08-26 14:09:14,248 - distributed.worker - ERROR - Unable to connect to scheduler: Worker 'tcp://127.0.0.1:34685' connected with 1 key(s) in memory! Worker reconnection is not supported. Keys: ['x']
-2022-08-26 14:09:14,249 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:14,250 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:14,456 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_worker_arrives_with_processing_data 2022-08-26 14:09:14,462 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:14,463 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:14,464 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34767
-2022-08-26 14:09:14,464 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39957
-2022-08-26 14:09:14,468 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36115
-2022-08-26 14:09:14,468 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36115
-2022-08-26 14:09:14,468 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:14,468 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35747
-2022-08-26 14:09:14,468 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34767
-2022-08-26 14:09:14,468 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:14,468 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:14,468 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:14,469 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fhxr66qs
-2022-08-26 14:09:14,469 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:14,469 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43429
-2022-08-26 14:09:14,469 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43429
-2022-08-26 14:09:14,469 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:14,469 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46729
-2022-08-26 14:09:14,469 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34767
-2022-08-26 14:09:14,469 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:14,469 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:14,469 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:14,469 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ig0q24i0
-2022-08-26 14:09:14,469 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:14,472 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36115', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:14,473 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36115
-2022-08-26 14:09:14,473 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:14,473 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43429', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:14,473 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43429
-2022-08-26 14:09:14,473 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:14,474 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34767
-2022-08-26 14:09:14,474 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:14,474 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34767
-2022-08-26 14:09:14,474 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:14,474 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:14,474 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:14,488 - distributed.scheduler - INFO - Receive client connection: Client-56c20a75-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:14,488 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:14,513 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40139
-2022-08-26 14:09:14,513 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40139
-2022-08-26 14:09:14,513 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36365
-2022-08-26 14:09:14,513 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34767
-2022-08-26 14:09:14,513 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:14,514 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:14,514 - distributed.worker - INFO -                Memory:                   5.24 GiB
-2022-08-26 14:09:14,514 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-45oz4f3r
-2022-08-26 14:09:14,514 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:14,515 - distributed.scheduler - ERROR - Worker 'tcp://127.0.0.1:40139' connected with 1 key(s) in memory! Worker reconnection is not supported. Keys: ['slowinc-7c35b9ce-c1ee-41c8-b5c3-4809aed55df6']
-2022-08-26 14:09:14,516 - distributed.worker - ERROR - Unable to connect to scheduler: Worker 'tcp://127.0.0.1:40139' connected with 1 key(s) in memory! Worker reconnection is not supported. Keys: ['slowinc-7c35b9ce-c1ee-41c8-b5c3-4809aed55df6']
-2022-08-26 14:09:14,516 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40139
-2022-08-26 14:09:15,720 - distributed.scheduler - INFO - Remove client Client-56c20a75-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:15,721 - distributed.scheduler - INFO - Remove client Client-56c20a75-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:15,721 - distributed.scheduler - INFO - Close client connection: Client-56c20a75-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:15,721 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36115
-2022-08-26 14:09:15,722 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43429
-2022-08-26 14:09:15,722 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36115', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:15,723 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36115
-2022-08-26 14:09:15,723 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43429', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:15,723 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43429
-2022-08-26 14:09:15,723 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:15,723 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8619eb33-d20a-45e5-98c0-2f5fb67f93c0 Address tcp://127.0.0.1:36115 Status: Status.closing
-2022-08-26 14:09:15,723 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-92eb7046-ceab-4520-8acc-f7e71722c938 Address tcp://127.0.0.1:43429 Status: Status.closing
-2022-08-26 14:09:15,724 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:15,724 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:15,931 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_run_on_scheduler_sync 2022-08-26 14:09:16,827 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:09:16,830 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:16,833 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:16,833 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38357
-2022-08-26 14:09:16,833 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:09:16,849 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46479
-2022-08-26 14:09:16,849 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46479
-2022-08-26 14:09:16,849 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40641
-2022-08-26 14:09:16,849 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38357
-2022-08-26 14:09:16,849 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:16,849 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:16,849 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:16,849 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-285idx2m
-2022-08-26 14:09:16,849 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:16,884 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41521
-2022-08-26 14:09:16,884 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41521
-2022-08-26 14:09:16,884 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39973
-2022-08-26 14:09:16,884 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38357
-2022-08-26 14:09:16,884 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:16,884 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:16,884 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:16,884 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0qkxy1jl
-2022-08-26 14:09:16,884 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,141 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46479', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:17,405 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46479
-2022-08-26 14:09:17,406 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,406 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38357
-2022-08-26 14:09:17,406 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,406 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41521', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:17,407 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41521
-2022-08-26 14:09:17,407 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,407 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,407 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38357
-2022-08-26 14:09:17,407 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,408 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,413 - distributed.scheduler - INFO - Receive client connection: Client-5880501a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:17,413 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,414 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:09:17,415 - distributed.worker - INFO - Run out-of-band function 'div'
-2022-08-26 14:09:17,415 - distributed.worker - WARNING - Run Failed
-Function: div
-args:     (1, 0)
-kwargs:   {}
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 3068, in run
-    result = function(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 234, in div
-    return x / y
-ZeroDivisionError: division by zero
-2022-08-26 14:09:17,489 - distributed.scheduler - INFO - Remove client Client-5880501a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:17,489 - distributed.scheduler - INFO - Remove client Client-5880501a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:17,489 - distributed.scheduler - INFO - Close client connection: Client-5880501a-2583-11ed-a99d-00d861bc4509
-PASSED
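The test_run_on_scheduler_sync log above shows out-of-band functions ('f', 'div') being run and a ZeroDivisionError traceback coming from the worker's run handler. A minimal sketch of that behaviour, assuming a local cluster and my understanding that Client.run re-raises the remote exception on the calling side:

from dask.distributed import Client, LocalCluster

def div(x, y):
    return x / y

if __name__ == "__main__":
    with LocalCluster(n_workers=2, threads_per_worker=1) as cluster:
        with Client(cluster) as client:
            print(client.run(lambda: "ok"))   # one result per worker address
            try:
                client.run(div, 1, 0)         # fails on the workers
            except ZeroDivisionError as exc:
                print("propagated back to the client:", exc)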
-distributed/tests/test_scheduler.py::test_run_on_scheduler 2022-08-26 14:09:17,501 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:17,503 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:17,503 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44705
-2022-08-26 14:09:17,503 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44579
-2022-08-26 14:09:17,503 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-285idx2m', purging
-2022-08-26 14:09:17,504 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-0qkxy1jl', purging
-2022-08-26 14:09:17,508 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40321
-2022-08-26 14:09:17,508 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40321
-2022-08-26 14:09:17,508 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:17,508 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36911
-2022-08-26 14:09:17,508 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44705
-2022-08-26 14:09:17,508 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,508 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:17,508 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:17,508 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nnbrkkaz
-2022-08-26 14:09:17,508 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,509 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46247
-2022-08-26 14:09:17,509 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46247
-2022-08-26 14:09:17,509 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:17,509 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36343
-2022-08-26 14:09:17,509 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44705
-2022-08-26 14:09:17,509 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,509 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:17,509 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:17,509 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4ueer3ws
-2022-08-26 14:09:17,509 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,512 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40321', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:17,512 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40321
-2022-08-26 14:09:17,512 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,513 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46247', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:17,513 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46247
-2022-08-26 14:09:17,513 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,513 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44705
-2022-08-26 14:09:17,513 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,514 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44705
-2022-08-26 14:09:17,514 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,514 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,514 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,528 - distributed.scheduler - INFO - Receive client connection: Client-5891e107-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:17,528 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,529 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:09:17,539 - distributed.scheduler - INFO - Remove client Client-5891e107-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:17,539 - distributed.scheduler - INFO - Remove client Client-5891e107-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:17,539 - distributed.scheduler - INFO - Close client connection: Client-5891e107-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:17,540 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40321
-2022-08-26 14:09:17,540 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46247
-2022-08-26 14:09:17,541 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40321', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:17,541 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40321
-2022-08-26 14:09:17,541 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46247', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:17,541 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46247
-2022-08-26 14:09:17,541 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:17,541 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-86398f8d-b4c9-44b4-bd8b-c2cadd5d0dad Address tcp://127.0.0.1:40321 Status: Status.closing
-2022-08-26 14:09:17,542 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-67471873-051e-4afb-8c9c-78323d9a13c6 Address tcp://127.0.0.1:46247 Status: Status.closing
-2022-08-26 14:09:17,542 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:17,543 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:17,749 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
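test_run_on_scheduler exercises Client.run_on_scheduler, where a function whose argument is named dask_scheduler receives the live Scheduler object. A small sketch under that assumption (the n_workers helper and cluster size are illustrative, not taken from the test):

from dask.distributed import Client, LocalCluster

def n_workers(dask_scheduler):
    # the scheduler injects itself because the parameter is named dask_scheduler
    return len(dask_scheduler.workers)

if __name__ == "__main__":
    with LocalCluster(n_workers=2) as cluster, Client(cluster) as client:
        print(client.run_on_scheduler(n_workers))   # expected: 2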
-distributed/tests/test_scheduler.py::test_run_on_scheduler_disabled 2022-08-26 14:09:17,755 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:17,756 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:17,757 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43769
-2022-08-26 14:09:17,757 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33469
-2022-08-26 14:09:17,761 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36461
-2022-08-26 14:09:17,761 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36461
-2022-08-26 14:09:17,761 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:17,761 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39073
-2022-08-26 14:09:17,761 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43769
-2022-08-26 14:09:17,761 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,761 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:17,761 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:17,761 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9uzf_50s
-2022-08-26 14:09:17,761 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,762 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40643
-2022-08-26 14:09:17,762 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40643
-2022-08-26 14:09:17,762 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:17,762 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35719
-2022-08-26 14:09:17,762 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43769
-2022-08-26 14:09:17,762 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,762 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:17,762 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:17,762 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-iochiq20
-2022-08-26 14:09:17,762 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,765 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36461', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:17,765 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36461
-2022-08-26 14:09:17,765 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,766 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40643', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:17,766 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40643
-2022-08-26 14:09:17,766 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,766 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43769
-2022-08-26 14:09:17,766 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,767 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43769
-2022-08-26 14:09:17,767 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:17,767 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,767 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,781 - distributed.scheduler - INFO - Receive client connection: Client-58b880aa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:17,781 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:17,782 - distributed.core - ERROR - Exception while handling op run_function
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 766, in _handle_comm
-    result = handler(comm, **msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/scheduler.py", line 6545, in run_function
-    raise ValueError(
-ValueError: Cannot run function as the scheduler has been explicitly disallowed from deserializing arbitrary bytestrings using pickle via the 'distributed.scheduler.pickle' configuration setting.
-2022-08-26 14:09:17,792 - distributed.scheduler - INFO - Remove client Client-58b880aa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:17,792 - distributed.scheduler - INFO - Remove client Client-58b880aa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:17,792 - distributed.scheduler - INFO - Close client connection: Client-58b880aa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:17,793 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36461
-2022-08-26 14:09:17,793 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40643
-2022-08-26 14:09:17,794 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36461', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:17,794 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36461
-2022-08-26 14:09:17,794 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40643', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:17,794 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40643
-2022-08-26 14:09:17,794 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:17,794 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c1fb23b6-ca41-4495-8b5b-6f483f2ca844 Address tcp://127.0.0.1:36461 Status: Status.closing
-2022-08-26 14:09:17,795 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6203ea7a-e083-40de-ab5b-911fb8e45577 Address tcp://127.0.0.1:40643 Status: Status.closing
-2022-08-26 14:09:17,796 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:17,796 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:18,002 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
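The ValueError in the test_run_on_scheduler_disabled log points at the 'distributed.scheduler.pickle' configuration key. A hedged sketch of turning that knob off, assuming the setting is consulted when run_on_scheduler is handled and that the error propagates back to the client as shown above:

import dask
from dask.distributed import Client, LocalCluster

if __name__ == "__main__":
    # disable pickle-based deserialization on the scheduler before it starts
    with dask.config.set({"distributed.scheduler.pickle": False}):
        with LocalCluster(n_workers=1) as cluster, Client(cluster) as client:
            try:
                client.run_on_scheduler(lambda dask_scheduler: None)
            except ValueError as exc:
                print("rejected by the scheduler:", exc)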
-distributed/tests/test_scheduler.py::test_close_worker 2022-08-26 14:09:18,008 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:18,009 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:18,009 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42323
-2022-08-26 14:09:18,009 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36659
-2022-08-26 14:09:18,014 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38797
-2022-08-26 14:09:18,014 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38797
-2022-08-26 14:09:18,014 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:18,014 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43523
-2022-08-26 14:09:18,014 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42323
-2022-08-26 14:09:18,014 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:18,014 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:18,014 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:18,014 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_f5q_6ve
-2022-08-26 14:09:18,014 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:18,015 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35201
-2022-08-26 14:09:18,015 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35201
-2022-08-26 14:09:18,015 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:18,015 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44681
-2022-08-26 14:09:18,015 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42323
-2022-08-26 14:09:18,015 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:18,015 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:18,015 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:18,015 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-r_9a_z7l
-2022-08-26 14:09:18,015 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:18,018 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38797', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:18,018 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38797
-2022-08-26 14:09:18,018 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:18,018 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35201', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:18,019 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35201
-2022-08-26 14:09:18,019 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:18,019 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42323
-2022-08-26 14:09:18,019 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:18,019 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42323
-2022-08-26 14:09:18,019 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:18,020 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:18,020 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:18,030 - distributed.scheduler - INFO - Closing worker tcp://127.0.0.1:38797
-2022-08-26 14:09:18,031 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38797
-2022-08-26 14:09:18,031 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38797', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:18,032 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38797
-2022-08-26 14:09:18,032 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1846cad7-9010-4e41-9a53-2c074c9e1aa8 Address tcp://127.0.0.1:38797 Status: Status.closing
-2022-08-26 14:09:18,243 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35201
-2022-08-26 14:09:18,244 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35201', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:18,244 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35201
-2022-08-26 14:09:18,244 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:18,244 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9fe585f6-0bbf-49bc-a6d2-5b2544111412 Address tcp://127.0.0.1:35201 Status: Status.closing
-2022-08-26 14:09:18,245 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:18,245 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:18,451 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_close_nanny 2022-08-26 14:09:18,457 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:18,458 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:18,458 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42865
-2022-08-26 14:09:18,458 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39921
-2022-08-26 14:09:18,464 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33897'
-2022-08-26 14:09:18,464 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36143'
-2022-08-26 14:09:19,144 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32889
-2022-08-26 14:09:19,144 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32889
-2022-08-26 14:09:19,144 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:19,144 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34429
-2022-08-26 14:09:19,144 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42865
-2022-08-26 14:09:19,144 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:19,144 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:19,144 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:19,144 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kh2fy1n_
-2022-08-26 14:09:19,144 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:19,148 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37807
-2022-08-26 14:09:19,148 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37807
-2022-08-26 14:09:19,148 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:19,148 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43301
-2022-08-26 14:09:19,148 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42865
-2022-08-26 14:09:19,148 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:19,148 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:19,148 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:19,148 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mknrq1gk
-2022-08-26 14:09:19,148 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:19,424 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37807', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:19,424 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37807
-2022-08-26 14:09:19,424 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:19,424 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42865
-2022-08-26 14:09:19,424 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:19,425 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:19,436 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32889', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:19,436 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32889
-2022-08-26 14:09:19,436 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:19,436 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42865
-2022-08-26 14:09:19,437 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:19,437 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:19,452 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37807', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:09:19,452 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37807
-2022-08-26 14:09:19,452 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37807
-2022-08-26 14:09:19,454 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ba743f25-553d-44f9-a45b-180bb9ead693 Address tcp://127.0.0.1:37807 Status: Status.closing
-2022-08-26 14:09:19,455 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:09:19,455 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:09:19,580 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:33897'.
-2022-08-26 14:09:20,661 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36143'.
-2022-08-26 14:09:20,661 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:20,661 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32889
-2022-08-26 14:09:20,662 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-540a873b-42d0-412e-85a7-0be1e335eead Address tcp://127.0.0.1:32889 Status: Status.closing
-2022-08-26 14:09:20,662 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32889', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:20,662 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32889
-2022-08-26 14:09:20,663 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:20,786 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:20,787 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:20,993 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
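test_close_nanny runs its workers behind Nanny processes ('Start Nanny at ...', 'Closing Nanny at ...'). A loose sketch of the same setup from user code, assuming LocalCluster(processes=True) is the usual way to get nanny-managed workers:

from dask.distributed import Client, LocalCluster

if __name__ == "__main__":
    # processes=True places each worker under a Nanny process
    with LocalCluster(n_workers=2, processes=True, threads_per_worker=1) as cluster:
        with Client(cluster) as client:
            print(client.run(lambda: "alive"))
    # leaving the with-blocks closes the nannies and their workers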
-distributed/tests/test_scheduler.py::test_retire_workers_close 2022-08-26 14:09:20,999 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:21,000 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:21,000 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38563
-2022-08-26 14:09:21,001 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42523
-2022-08-26 14:09:21,005 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33751
-2022-08-26 14:09:21,005 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33751
-2022-08-26 14:09:21,005 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:21,005 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44973
-2022-08-26 14:09:21,005 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38563
-2022-08-26 14:09:21,005 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:21,005 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:21,005 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:21,005 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y2rrg_cy
-2022-08-26 14:09:21,005 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:21,006 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45283
-2022-08-26 14:09:21,006 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45283
-2022-08-26 14:09:21,006 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:21,006 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35379
-2022-08-26 14:09:21,006 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38563
-2022-08-26 14:09:21,006 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:21,006 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:21,006 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:21,006 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-voh19wc5
-2022-08-26 14:09:21,006 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:21,009 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33751', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:21,010 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33751
-2022-08-26 14:09:21,010 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:21,010 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45283', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:21,010 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45283
-2022-08-26 14:09:21,010 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:21,011 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38563
-2022-08-26 14:09:21,011 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:21,011 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38563
-2022-08-26 14:09:21,011 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:21,011 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:21,011 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:21,025 - distributed.scheduler - INFO - Receive client connection: Client-5aa783d7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:21,025 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:21,026 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:33751
-2022-08-26 14:09:21,026 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:45283
-2022-08-26 14:09:21,026 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:33751; no unique keys need to be moved away.
-2022-08-26 14:09:21,026 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:45283; no unique keys need to be moved away.
-2022-08-26 14:09:21,026 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33751', name: 0, status: closing_gracefully, memory: 0, processing: 0>
-2022-08-26 14:09:21,026 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33751
-2022-08-26 14:09:21,026 - distributed.scheduler - INFO - Retired worker tcp://127.0.0.1:33751
-2022-08-26 14:09:21,026 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45283', name: 1, status: closing_gracefully, memory: 0, processing: 0>
-2022-08-26 14:09:21,026 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45283
-2022-08-26 14:09:21,026 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:21,026 - distributed.scheduler - INFO - Retired worker tcp://127.0.0.1:45283
-2022-08-26 14:09:21,032 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33751
-2022-08-26 14:09:21,033 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45283
-2022-08-26 14:09:21,034 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-06720271-cab1-4e17-9d83-08d82cc70fe1 Address tcp://127.0.0.1:33751 Status: Status.closing
-2022-08-26 14:09:21,034 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b5419bf8-d967-497f-99dc-c8865cb6e151 Address tcp://127.0.0.1:45283 Status: Status.closing
-2022-08-26 14:09:21,038 - distributed.scheduler - INFO - Remove client Client-5aa783d7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:21,038 - distributed.scheduler - INFO - Remove client Client-5aa783d7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:21,039 - distributed.scheduler - INFO - Close client connection: Client-5aa783d7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:21,039 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:21,039 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:21,246 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_retire_nannies_close 2022-08-26 14:09:21,252 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:21,253 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:21,254 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40047
-2022-08-26 14:09:21,254 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40133
-2022-08-26 14:09:21,259 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33117'
-2022-08-26 14:09:21,259 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:46721'
-2022-08-26 14:09:21,944 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38553
-2022-08-26 14:09:21,944 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38553
-2022-08-26 14:09:21,944 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:21,944 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34769
-2022-08-26 14:09:21,944 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40047
-2022-08-26 14:09:21,944 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:21,944 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:21,944 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:21,944 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yffdf94c
-2022-08-26 14:09:21,945 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:21,949 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44945
-2022-08-26 14:09:21,949 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44945
-2022-08-26 14:09:21,949 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:21,949 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45355
-2022-08-26 14:09:21,949 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40047
-2022-08-26 14:09:21,949 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:21,949 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:21,949 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:21,949 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-882i9qt6
-2022-08-26 14:09:21,949 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:22,222 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44945', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:22,222 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44945
-2022-08-26 14:09:22,222 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:22,222 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40047
-2022-08-26 14:09:22,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:22,223 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:22,235 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38553', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:22,236 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38553
-2022-08-26 14:09:22,236 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:22,236 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40047
-2022-08-26 14:09:22,236 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:22,236 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:22,251 - distributed.scheduler - INFO - Receive client connection: Client-5b62a75d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:22,252 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:22,252 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:38553
-2022-08-26 14:09:22,252 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:44945
-2022-08-26 14:09:22,252 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:44945; no unique keys need to be moved away.
-2022-08-26 14:09:22,252 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:38553; no unique keys need to be moved away.
-2022-08-26 14:09:22,253 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38553', name: 1, status: closing_gracefully, memory: 0, processing: 0>
-2022-08-26 14:09:22,253 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38553
-2022-08-26 14:09:22,253 - distributed.scheduler - INFO - Retired worker tcp://127.0.0.1:38553
-2022-08-26 14:09:22,253 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44945', name: 0, status: closing_gracefully, memory: 0, processing: 0>
-2022-08-26 14:09:22,253 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44945
-2022-08-26 14:09:22,253 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:22,253 - distributed.scheduler - INFO - Retired worker tcp://127.0.0.1:44945
-2022-08-26 14:09:22,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38553
-2022-08-26 14:09:22,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44945
-2022-08-26 14:09:22,260 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b9fe67c1-7da1-418c-8c8b-8d4df7aa9a63 Address tcp://127.0.0.1:38553 Status: Status.closing
-2022-08-26 14:09:22,260 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2368a2f3-6516-4b25-bab7-535da488693b Address tcp://127.0.0.1:44945 Status: Status.closing
-2022-08-26 14:09:22,261 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:09:22,261 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:09:22,261 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:09:22,262 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:09:22,390 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:46721'.
-2022-08-26 14:09:22,391 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:33117'.
-2022-08-26 14:09:22,406 - distributed.scheduler - INFO - Remove client Client-5b62a75d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:22,406 - distributed.scheduler - INFO - Remove client Client-5b62a75d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:22,406 - distributed.scheduler - INFO - Close client connection: Client-5b62a75d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:22,407 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:22,407 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:22,613 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_fifo_submission 2022-08-26 14:09:22,619 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:22,621 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:22,621 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37471
-2022-08-26 14:09:22,621 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38983
-2022-08-26 14:09:22,624 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40023
-2022-08-26 14:09:22,624 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40023
-2022-08-26 14:09:22,624 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:22,624 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37329
-2022-08-26 14:09:22,624 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37471
-2022-08-26 14:09:22,624 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:22,624 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:22,624 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:22,624 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nxwqruyd
-2022-08-26 14:09:22,624 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:22,626 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40023', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:22,626 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40023
-2022-08-26 14:09:22,626 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:22,627 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37471
-2022-08-26 14:09:22,627 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:22,627 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:22,640 - distributed.scheduler - INFO - Receive client connection: Client-5b9e051b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:22,641 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:23,685 - distributed.scheduler - INFO - Remove client Client-5b9e051b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:23,685 - distributed.scheduler - INFO - Remove client Client-5b9e051b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:23,685 - distributed.scheduler - INFO - Close client connection: Client-5b9e051b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:23,685 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40023
-2022-08-26 14:09:23,686 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40023', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:23,686 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40023
-2022-08-26 14:09:23,686 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:23,686 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4a5ee7e5-d124-4be6-8392-dcf61c2e3570 Address tcp://127.0.0.1:40023 Status: Status.closing
-2022-08-26 14:09:23,687 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:23,687 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:23,895 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_scheduler_file 2022-08-26 14:09:23,920 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:23,922 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:23,922 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:34591
-2022-08-26 14:09:23,922 - distributed.scheduler - INFO -   dashboard at:                    :42809
-2022-08-26 14:09:23,926 - distributed.scheduler - INFO - Receive client connection: Client-5c622074-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:23,926 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:23,937 - distributed.scheduler - INFO - Remove client Client-5c622074-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:23,937 - distributed.scheduler - INFO - Remove client Client-5c622074-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:23,938 - distributed.scheduler - INFO - Close client connection: Client-5c622074-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:23,938 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:23,938 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_scheduler.py::test_dashboard_host[None-expect0-tcp://0.0.0.0] 2022-08-26 14:09:23,944 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:23,945 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:23,945 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:45385
-2022-08-26 14:09:23,945 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 14:09:23,946 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:23,946 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_scheduler.py::test_dashboard_host[None-expect0-tcp://127.0.0.1] 2022-08-26 14:09:23,951 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:23,952 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:23,953 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35713
-2022-08-26 14:09:23,953 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:09:23,953 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:23,953 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_scheduler.py::test_dashboard_host[None-expect0-tcp://127.0.0.1:38275] 2022-08-26 14:09:23,958 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:23,960 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:23,960 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38275
-2022-08-26 14:09:23,960 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:09:23,960 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:23,960 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_scheduler.py::test_dashboard_host[127.0.0.1:0-expect1-tcp://0.0.0.0] 2022-08-26 14:09:23,986 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:23,988 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:23,988 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:41481
-2022-08-26 14:09:23,988 - distributed.scheduler - INFO -   dashboard at:                    :42017
-2022-08-26 14:09:23,989 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:23,989 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_scheduler.py::test_dashboard_host[127.0.0.1:0-expect1-tcp://127.0.0.1] 2022-08-26 14:09:24,014 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:24,015 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:24,015 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35877
-2022-08-26 14:09:24,015 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40001
-2022-08-26 14:09:24,016 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:24,016 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_scheduler.py::test_dashboard_host[127.0.0.1:0-expect1-tcp://127.0.0.1:38275] 2022-08-26 14:09:24,042 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:24,043 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:24,043 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38275
-2022-08-26 14:09:24,043 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34027
-2022-08-26 14:09:24,044 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:24,044 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_scheduler.py::test_profile_metadata 2022-08-26 14:09:24,049 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:24,051 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:24,051 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33197
-2022-08-26 14:09:24,051 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44523
-2022-08-26 14:09:24,055 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37723
-2022-08-26 14:09:24,055 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37723
-2022-08-26 14:09:24,055 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:24,055 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38011
-2022-08-26 14:09:24,056 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33197
-2022-08-26 14:09:24,056 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:24,056 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:24,056 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:24,056 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rxgvwuio
-2022-08-26 14:09:24,056 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:24,056 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45679
-2022-08-26 14:09:24,056 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45679
-2022-08-26 14:09:24,056 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:24,056 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41453
-2022-08-26 14:09:24,056 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33197
-2022-08-26 14:09:24,056 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:24,057 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:24,057 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:24,057 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-l14xpo3h
-2022-08-26 14:09:24,057 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:24,059 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37723', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:24,060 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37723
-2022-08-26 14:09:24,060 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:24,060 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45679', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:24,060 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45679
-2022-08-26 14:09:24,061 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:24,061 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33197
-2022-08-26 14:09:24,061 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:24,061 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33197
-2022-08-26 14:09:24,061 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:24,061 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:24,061 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:24,075 - distributed.scheduler - INFO - Receive client connection: Client-5c78f338-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:24,075 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:24,816 - distributed.scheduler - INFO - Remove client Client-5c78f338-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:24,817 - distributed.scheduler - INFO - Remove client Client-5c78f338-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:24,817 - distributed.scheduler - INFO - Close client connection: Client-5c78f338-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:24,817 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37723
-2022-08-26 14:09:24,818 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45679
-2022-08-26 14:09:24,819 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37723', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:24,819 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37723
-2022-08-26 14:09:24,819 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45679', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:24,819 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45679
-2022-08-26 14:09:24,819 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:24,819 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ca0f49ac-4e9d-4af1-9bff-3186531faa36 Address tcp://127.0.0.1:37723 Status: Status.closing
-2022-08-26 14:09:24,820 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b3d68958-e702-407b-9be5-ea7a7ac2c434 Address tcp://127.0.0.1:45679 Status: Status.closing
-2022-08-26 14:09:24,821 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:24,821 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:25,031 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_profile_metadata_timeout 2022-08-26 14:09:25,037 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:25,039 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:25,039 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45423
-2022-08-26 14:09:25,039 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44953
-2022-08-26 14:09:25,043 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35215
-2022-08-26 14:09:25,043 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35215
-2022-08-26 14:09:25,043 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:25,043 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36425
-2022-08-26 14:09:25,043 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45423
-2022-08-26 14:09:25,044 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:25,044 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:25,044 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:25,044 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qmiftm_l
-2022-08-26 14:09:25,044 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:25,044 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46715
-2022-08-26 14:09:25,044 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46715
-2022-08-26 14:09:25,044 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:25,044 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36141
-2022-08-26 14:09:25,044 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45423
-2022-08-26 14:09:25,044 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:25,044 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:25,045 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:25,045 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-trjxcoqd
-2022-08-26 14:09:25,045 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:25,047 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35215', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:25,048 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35215
-2022-08-26 14:09:25,048 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:25,048 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46715', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:25,048 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46715
-2022-08-26 14:09:25,049 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:25,049 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45423
-2022-08-26 14:09:25,049 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:25,049 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45423
-2022-08-26 14:09:25,049 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:25,049 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:25,049 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:25,063 - distributed.scheduler - INFO - Receive client connection: Client-5d0fbd05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:25,063 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:25,794 - distributed.core - ERROR - Exception while handling op profile_metadata
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 768, in _handle_comm
-    result = handler(**msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_scheduler.py", line 1475, in raise_timeout
-    raise TimeoutError
-asyncio.exceptions.TimeoutError
-2022-08-26 14:09:25,806 - distributed.scheduler - INFO - Remove client Client-5d0fbd05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:25,806 - distributed.scheduler - INFO - Remove client Client-5d0fbd05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:25,807 - distributed.scheduler - INFO - Close client connection: Client-5d0fbd05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:25,807 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35215
-2022-08-26 14:09:25,807 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46715
-2022-08-26 14:09:25,808 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35215', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:25,808 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35215
-2022-08-26 14:09:25,808 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46715', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:25,808 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46715
-2022-08-26 14:09:25,809 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:25,809 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-267c2e7e-ed8e-46aa-abdf-b345ae55ec1d Address tcp://127.0.0.1:35215 Status: Status.closing
-2022-08-26 14:09:25,809 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-470c133f-7934-4a0d-8b9e-17f01e72a1e1 Address tcp://127.0.0.1:46715 Status: Status.closing
-2022-08-26 14:09:25,810 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:25,811 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:26,019 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_profile_metadata_keys 2022-08-26 14:09:26,035 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:26,036 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:26,037 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38773
-2022-08-26 14:09:26,037 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34241
-2022-08-26 14:09:26,041 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44409
-2022-08-26 14:09:26,041 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44409
-2022-08-26 14:09:26,041 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:26,041 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41225
-2022-08-26 14:09:26,041 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38773
-2022-08-26 14:09:26,041 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,041 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:26,041 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:26,042 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_8yo67hm
-2022-08-26 14:09:26,042 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,042 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38177
-2022-08-26 14:09:26,042 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38177
-2022-08-26 14:09:26,042 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:26,042 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33029
-2022-08-26 14:09:26,042 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38773
-2022-08-26 14:09:26,042 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,042 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:26,042 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:26,042 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m31ki1eg
-2022-08-26 14:09:26,043 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,045 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44409', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:26,046 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44409
-2022-08-26 14:09:26,046 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:26,046 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38177', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:26,046 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38177
-2022-08-26 14:09:26,046 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:26,047 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38773
-2022-08-26 14:09:26,047 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,047 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38773
-2022-08-26 14:09:26,047 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,047 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:26,047 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:26,061 - distributed.scheduler - INFO - Receive client connection: Client-5da800a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:26,061 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:26,452 - distributed.scheduler - INFO - Remove client Client-5da800a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:26,452 - distributed.scheduler - INFO - Remove client Client-5da800a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:26,453 - distributed.scheduler - INFO - Close client connection: Client-5da800a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:26,453 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44409
-2022-08-26 14:09:26,453 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38177
-2022-08-26 14:09:26,454 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44409', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:26,454 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44409
-2022-08-26 14:09:26,455 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38177', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:26,455 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38177
-2022-08-26 14:09:26,455 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:26,455 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cb3f86cc-d0aa-4e91-927d-73d6060a963f Address tcp://127.0.0.1:44409 Status: Status.closing
-2022-08-26 14:09:26,455 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-30a19681-3f97-4c6b-8172-5ea22fe99785 Address tcp://127.0.0.1:38177 Status: Status.closing
-2022-08-26 14:09:26,457 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:26,457 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:26,666 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_statistical_profiling 2022-08-26 14:09:26,682 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:26,683 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:26,684 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37255
-2022-08-26 14:09:26,684 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44799
-2022-08-26 14:09:26,688 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40561
-2022-08-26 14:09:26,688 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40561
-2022-08-26 14:09:26,688 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:26,688 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43855
-2022-08-26 14:09:26,688 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37255
-2022-08-26 14:09:26,688 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,688 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:26,688 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:26,688 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ey8z7dqu
-2022-08-26 14:09:26,689 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,689 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46211
-2022-08-26 14:09:26,689 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46211
-2022-08-26 14:09:26,689 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:26,689 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37755
-2022-08-26 14:09:26,689 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37255
-2022-08-26 14:09:26,689 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,689 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:26,689 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:26,689 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jwnax7vj
-2022-08-26 14:09:26,690 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,692 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40561', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:26,693 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40561
-2022-08-26 14:09:26,693 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:26,693 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46211', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:26,693 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46211
-2022-08-26 14:09:26,693 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:26,694 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37255
-2022-08-26 14:09:26,694 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,694 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37255
-2022-08-26 14:09:26,694 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:26,694 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:26,694 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:26,708 - distributed.scheduler - INFO - Receive client connection: Client-5e0ab98d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:26,709 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:27,143 - distributed.scheduler - INFO - Remove client Client-5e0ab98d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:27,143 - distributed.scheduler - INFO - Remove client Client-5e0ab98d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:27,143 - distributed.scheduler - INFO - Close client connection: Client-5e0ab98d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:27,143 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40561
-2022-08-26 14:09:27,144 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46211
-2022-08-26 14:09:27,145 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40561', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:27,145 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40561
-2022-08-26 14:09:27,145 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46211', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:27,145 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46211
-2022-08-26 14:09:27,145 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:27,145 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-33e7386e-efb6-49de-b7b3-10fc29386cb7 Address tcp://127.0.0.1:40561 Status: Status.closing
-2022-08-26 14:09:27,146 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-41e923a8-7e52-464e-8071-db108fe1aba0 Address tcp://127.0.0.1:46211 Status: Status.closing
-2022-08-26 14:09:27,147 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:27,147 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:27,357 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_statistical_profiling_failure 2022-08-26 14:09:27,373 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:27,374 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:27,374 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44227
-2022-08-26 14:09:27,374 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39351
-2022-08-26 14:09:27,379 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41595
-2022-08-26 14:09:27,379 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41595
-2022-08-26 14:09:27,379 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:27,379 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44793
-2022-08-26 14:09:27,379 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44227
-2022-08-26 14:09:27,379 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:27,379 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:27,379 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:27,379 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rispz8l9
-2022-08-26 14:09:27,379 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:27,380 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46643
-2022-08-26 14:09:27,380 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46643
-2022-08-26 14:09:27,380 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:27,380 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43455
-2022-08-26 14:09:27,380 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44227
-2022-08-26 14:09:27,380 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:27,380 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:27,380 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:27,380 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jn1hlpal
-2022-08-26 14:09:27,380 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:27,383 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41595', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:27,384 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41595
-2022-08-26 14:09:27,384 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:27,384 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46643', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:27,384 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46643
-2022-08-26 14:09:27,384 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:27,385 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44227
-2022-08-26 14:09:27,385 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:27,385 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44227
-2022-08-26 14:09:27,385 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:27,385 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:27,385 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:27,399 - distributed.scheduler - INFO - Receive client connection: Client-5e74278a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:27,400 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:27,823 - distributed.core - ERROR - Exception while handling op profile
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 768, in _handle_comm
-    result = handler(**msg)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_scheduler.py", line 1539, in raise_timeout
-    raise TimeoutError
-asyncio.exceptions.TimeoutError
-2022-08-26 14:09:27,836 - distributed.scheduler - INFO - Remove client Client-5e74278a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:27,836 - distributed.scheduler - INFO - Remove client Client-5e74278a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:27,836 - distributed.scheduler - INFO - Close client connection: Client-5e74278a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:27,837 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41595
-2022-08-26 14:09:27,837 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46643
-2022-08-26 14:09:27,839 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41595', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:27,839 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41595
-2022-08-26 14:09:27,839 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46643', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:27,839 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46643
-2022-08-26 14:09:27,839 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:27,839 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-251b0af4-9d7b-4731-9251-338597555c11 Address tcp://127.0.0.1:41595 Status: Status.closing
-2022-08-26 14:09:27,840 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0d80bdd1-ccb2-4e85-bf6a-a38b29036461 Address tcp://127.0.0.1:46643 Status: Status.closing
-2022-08-26 14:09:27,841 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:27,841 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:28,052 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_cancel_fire_and_forget 2022-08-26 14:09:28,068 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:28,069 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:28,069 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45973
-2022-08-26 14:09:28,069 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44175
-2022-08-26 14:09:28,074 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42045
-2022-08-26 14:09:28,074 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42045
-2022-08-26 14:09:28,074 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:28,074 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45417
-2022-08-26 14:09:28,074 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45973
-2022-08-26 14:09:28,074 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:28,074 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:28,074 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:28,074 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_u5wgh1_
-2022-08-26 14:09:28,074 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:28,075 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44747
-2022-08-26 14:09:28,075 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44747
-2022-08-26 14:09:28,075 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:28,075 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41439
-2022-08-26 14:09:28,075 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45973
-2022-08-26 14:09:28,075 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:28,075 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:28,075 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:28,075 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-15ln9c7b
-2022-08-26 14:09:28,075 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:28,078 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42045', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:28,078 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42045
-2022-08-26 14:09:28,079 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:28,079 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44747', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:28,079 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44747
-2022-08-26 14:09:28,079 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:28,079 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45973
-2022-08-26 14:09:28,080 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:28,080 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45973
-2022-08-26 14:09:28,080 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:28,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:28,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:28,094 - distributed.scheduler - INFO - Receive client connection: Client-5ede2631-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:28,094 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:28,113 - distributed.scheduler - INFO - Client Client-5ede2631-2583-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:09:28,113 - distributed.scheduler - INFO - Scheduler cancels key z.  Force=True
-2022-08-26 14:09:28,125 - distributed.scheduler - INFO - Remove client Client-5ede2631-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:28,125 - distributed.scheduler - INFO - Remove client Client-5ede2631-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:28,126 - distributed.scheduler - INFO - Close client connection: Client-5ede2631-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:28,126 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42045
-2022-08-26 14:09:28,126 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44747
-2022-08-26 14:09:28,127 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42045', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:28,127 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42045
-2022-08-26 14:09:28,128 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44747', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:28,128 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44747
-2022-08-26 14:09:28,128 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:28,128 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b409d388-f27c-46de-a0a7-2d7dcc7b3c7c Address tcp://127.0.0.1:42045 Status: Status.closing
-2022-08-26 14:09:28,128 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-72854378-b034-4620-b888-ea0d6733362c Address tcp://127.0.0.1:44747 Status: Status.closing
-2022-08-26 14:09:28,129 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:28,129 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:28,337 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_log_tasks_during_restart 2022-08-26 14:09:28,343 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:28,345 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:28,345 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34457
-2022-08-26 14:09:28,345 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45391
-2022-08-26 14:09:28,350 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36771'
-2022-08-26 14:09:28,350 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:44691'
-2022-08-26 14:09:29,039 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37481
-2022-08-26 14:09:29,039 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37481
-2022-08-26 14:09:29,039 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:29,039 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41985
-2022-08-26 14:09:29,039 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34457
-2022-08-26 14:09:29,039 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:29,039 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:29,039 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45961
-2022-08-26 14:09:29,039 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:29,039 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45961
-2022-08-26 14:09:29,039 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jfvpgtq7
-2022-08-26 14:09:29,039 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:29,039 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:29,039 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41971
-2022-08-26 14:09:29,039 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34457
-2022-08-26 14:09:29,039 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:29,039 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:29,039 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:29,039 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xb9xnomf
-2022-08-26 14:09:29,039 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:29,312 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45961', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:29,313 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45961
-2022-08-26 14:09:29,313 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:29,313 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34457
-2022-08-26 14:09:29,313 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:29,313 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:29,331 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37481', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:29,331 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37481
-2022-08-26 14:09:29,331 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:29,331 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34457
-2022-08-26 14:09:29,332 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:29,332 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:29,343 - distributed.scheduler - INFO - Receive client connection: Client-5f9cb7f3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:29,343 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:29,356 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:09:29,478 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37481', name: 1, status: running, memory: 0, processing: 1>
-2022-08-26 14:09:29,478 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37481
-2022-08-26 14:09:29,479 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:09:29,480 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:09:29,613 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45961', name: 0, status: running, memory: 0, processing: 1>
-2022-08-26 14:09:29,613 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45961
-2022-08-26 14:09:29,614 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:29,615 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:09:30,168 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37847
-2022-08-26 14:09:30,168 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37847
-2022-08-26 14:09:30,168 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:30,168 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43963
-2022-08-26 14:09:30,168 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34457
-2022-08-26 14:09:30,168 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:30,168 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:30,168 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:30,168 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-w2_grpq1
-2022-08-26 14:09:30,168 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:30,304 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34239
-2022-08-26 14:09:30,304 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34239
-2022-08-26 14:09:30,304 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:30,304 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42119
-2022-08-26 14:09:30,304 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34457
-2022-08-26 14:09:30,305 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:30,305 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:30,305 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:30,305 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vit2x40z
-2022-08-26 14:09:30,305 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:30,462 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37847', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:30,463 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37847
-2022-08-26 14:09:30,463 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:30,463 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34457
-2022-08-26 14:09:30,463 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:30,463 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:30,465 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:09:30,581 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34239', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:30,581 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34239
-2022-08-26 14:09:30,582 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:30,582 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34457
-2022-08-26 14:09:30,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:30,582 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:30,593 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37847', name: 1, status: running, memory: 0, processing: 1>
-2022-08-26 14:09:30,593 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37847
-2022-08-26 14:09:30,594 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:09:30,720 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34239', name: 0, status: running, memory: 0, processing: 1>
-2022-08-26 14:09:30,720 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34239
-2022-08-26 14:09:30,720 - distributed.scheduler - INFO - Task exit-c82ea2d1331521a93741b191018ec492 marked as failed because 3 workers died while trying to run it
-2022-08-26 14:09:30,720 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:30,723 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:09:30,734 - distributed.scheduler - INFO - Remove client Client-5f9cb7f3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:30,735 - distributed.scheduler - INFO - Remove client Client-5f9cb7f3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:30,735 - distributed.scheduler - INFO - Close client connection: Client-5f9cb7f3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:30,735 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36771'.
-2022-08-26 14:09:30,735 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:30,735 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:44691'.
-2022-08-26 14:09:30,736 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:30,736 - distributed.nanny - ERROR - Error in Nanny killing Worker subprocess
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 595, in close
-    await self.kill(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 386, in kill
-    await self.process.kill(timeout=0.8 * (deadline - time()))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 796, in kill
-    await process.join(wait_timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/process.py", line 311, in join
-    assert self._state.pid is not None, "can only join a started process"
-AssertionError: can only join a started process
-2022-08-26 14:09:30,753 - tornado.application - ERROR - Exception in callback functools.partial(<built-in method set_result of _asyncio.Future object at 0x564040da61c0>, None)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 740, in _run_callback
-    ret = callback()
-asyncio.exceptions.InvalidStateError: invalid state
-2022-08-26 14:09:31,280 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46809
-2022-08-26 14:09:31,280 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46809
-2022-08-26 14:09:31,280 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:31,280 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33181
-2022-08-26 14:09:31,280 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34457
-2022-08-26 14:09:31,280 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:31,280 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:31,280 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:31,280 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fpyqkc_n
-2022-08-26 14:09:31,280 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:31,281 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46809
-2022-08-26 14:09:31,281 - distributed.worker - INFO - Closed worker has not yet started: Status.init
-2022-08-26 14:09:31,422 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34537
-2022-08-26 14:09:31,422 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34537
-2022-08-26 14:09:31,422 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:31,422 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42899
-2022-08-26 14:09:31,422 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34457
-2022-08-26 14:09:31,422 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:31,422 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:31,422 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:31,422 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hac27o7j
-2022-08-26 14:09:31,422 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:31,422 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34537
-2022-08-26 14:09:31,422 - distributed.worker - INFO - Closed worker has not yet started: Status.init
-2022-08-26 14:09:31,553 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46809', name: 1, status: closed, memory: 0, processing: 0>
-2022-08-26 14:09:31,553 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46809
-2022-08-26 14:09:31,553 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:31,677 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46809', name: 1, status: closed, memory: 0, processing: 0>
-2022-08-26 14:09:31,678 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46809
-2022-08-26 14:09:31,678 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:31,678 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:31,678 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:31,703 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34457
-2022-08-26 14:09:31,891 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_get_task_status 2022-08-26 14:09:31,897 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:31,899 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:31,899 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34533
-2022-08-26 14:09:31,899 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46341
-2022-08-26 14:09:31,904 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40373
-2022-08-26 14:09:31,904 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40373
-2022-08-26 14:09:31,904 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:31,904 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38399
-2022-08-26 14:09:31,904 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34533
-2022-08-26 14:09:31,904 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:31,904 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:31,904 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:31,904 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-b6uyvz8g
-2022-08-26 14:09:31,904 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:31,904 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41099
-2022-08-26 14:09:31,905 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41099
-2022-08-26 14:09:31,905 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:31,905 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37707
-2022-08-26 14:09:31,905 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34533
-2022-08-26 14:09:31,905 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:31,905 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:31,905 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:31,905 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0y27e__4
-2022-08-26 14:09:31,905 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:31,908 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40373', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:31,908 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40373
-2022-08-26 14:09:31,908 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:31,909 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41099', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:31,909 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41099
-2022-08-26 14:09:31,909 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:31,909 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34533
-2022-08-26 14:09:31,909 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:31,910 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34533
-2022-08-26 14:09:31,910 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:31,910 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:31,910 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:31,924 - distributed.scheduler - INFO - Receive client connection: Client-61268b9b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:31,924 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:31,946 - distributed.scheduler - INFO - Remove client Client-61268b9b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:31,946 - distributed.scheduler - INFO - Remove client Client-61268b9b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:31,946 - distributed.scheduler - INFO - Close client connection: Client-61268b9b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:31,947 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40373
-2022-08-26 14:09:31,947 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41099
-2022-08-26 14:09:31,948 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41099', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:31,948 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41099
-2022-08-26 14:09:31,948 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-37138d38-844f-4d8a-aa68-0d621708299b Address tcp://127.0.0.1:41099 Status: Status.closing
-2022-08-26 14:09:31,949 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-96e19d49-71ac-40df-bde2-7eeb5b02343d Address tcp://127.0.0.1:40373 Status: Status.closing
-2022-08-26 14:09:31,949 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40373', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:31,949 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40373
-2022-08-26 14:09:31,949 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:31,950 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:31,950 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:32,157 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_deque_handler 2022-08-26 14:09:32,163 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:32,165 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:32,165 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39189
-2022-08-26 14:09:32,165 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35389
-2022-08-26 14:09:32,165 - distributed.scheduler - INFO - foo123
-2022-08-26 14:09:32,165 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:32,166 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:32,372 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_retries 2022-08-26 14:09:32,377 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:32,379 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:32,379 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39617
-2022-08-26 14:09:32,379 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36121
-2022-08-26 14:09:32,384 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43405
-2022-08-26 14:09:32,384 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43405
-2022-08-26 14:09:32,384 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:32,384 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40125
-2022-08-26 14:09:32,384 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39617
-2022-08-26 14:09:32,384 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,384 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:32,384 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:32,384 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-th_1pbzb
-2022-08-26 14:09:32,384 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,385 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36167
-2022-08-26 14:09:32,385 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36167
-2022-08-26 14:09:32,385 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:32,385 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34375
-2022-08-26 14:09:32,385 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39617
-2022-08-26 14:09:32,385 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,385 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:32,385 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:32,385 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ojr7wl04
-2022-08-26 14:09:32,385 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,388 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43405', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:32,388 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43405
-2022-08-26 14:09:32,388 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,389 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36167', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:32,389 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36167
-2022-08-26 14:09:32,389 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,389 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39617
-2022-08-26 14:09:32,389 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,389 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39617
-2022-08-26 14:09:32,390 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,390 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,390 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,404 - distributed.scheduler - INFO - Receive client connection: Client-616fc662-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:32,404 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,418 - distributed.worker - WARNING - Compute Failed
-Key:       func-60e6566ed76c5d81804e3ffd4288f297
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:09:32,420 - distributed.worker - WARNING - Compute Failed
-Key:       func-60e6566ed76c5d81804e3ffd4288f297
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:09:32,430 - distributed.worker - WARNING - Compute Failed
-Key:       func-feb7edad-be1d-464c-9cc8-52b1177f9fe2
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:09:32,432 - distributed.worker - WARNING - Compute Failed
-Key:       func-feb7edad-be1d-464c-9cc8-52b1177f9fe2
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:09:32,451 - distributed.worker - WARNING - Compute Failed
-Key:       func-c9acbcd3-47cc-4cd2-80db-a56d50d085ac
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:09:32,456 - distributed.worker - WARNING - Compute Failed
-Key:       func-c9acbcd3-47cc-4cd2-80db-a56d50d085ac
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('two')"
-
-2022-08-26 14:09:32,461 - distributed.worker - WARNING - Compute Failed
-Key:       func-d0c65c6b-4eae-4a6c-955a-6f5583aefde3
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "ZeroDivisionError('one')"
-
-2022-08-26 14:09:32,470 - distributed.scheduler - INFO - Remove client Client-616fc662-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:32,470 - distributed.scheduler - INFO - Remove client Client-616fc662-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:32,471 - distributed.scheduler - INFO - Close client connection: Client-616fc662-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:32,472 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43405
-2022-08-26 14:09:32,472 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36167
-2022-08-26 14:09:32,473 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43405', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:32,473 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43405
-2022-08-26 14:09:32,473 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36167', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:32,473 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36167
-2022-08-26 14:09:32,473 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:32,473 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ff33b541-9aab-4c1b-942b-bc97693eccf4 Address tcp://127.0.0.1:43405 Status: Status.closing
-2022-08-26 14:09:32,474 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0302c4a0-af6d-4036-a63c-b203d5709f26 Address tcp://127.0.0.1:36167 Status: Status.closing
-2022-08-26 14:09:32,475 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:32,475 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:32,682 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_missing_data_errant_worker 2022-08-26 14:09:32,688 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:32,689 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:32,690 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46765
-2022-08-26 14:09:32,690 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33097
-2022-08-26 14:09:32,696 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36281
-2022-08-26 14:09:32,696 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36281
-2022-08-26 14:09:32,696 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:32,696 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37515
-2022-08-26 14:09:32,696 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46765
-2022-08-26 14:09:32,696 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,696 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:32,696 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:32,696 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_uzkhtso
-2022-08-26 14:09:32,696 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,697 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46471
-2022-08-26 14:09:32,697 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46471
-2022-08-26 14:09:32,697 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:32,697 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41797
-2022-08-26 14:09:32,697 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46765
-2022-08-26 14:09:32,697 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,697 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:32,697 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:32,697 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h9wiuwnn
-2022-08-26 14:09:32,697 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,698 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36885
-2022-08-26 14:09:32,698 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36885
-2022-08-26 14:09:32,698 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:09:32,698 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39559
-2022-08-26 14:09:32,698 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46765
-2022-08-26 14:09:32,698 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,698 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:32,698 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:32,698 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-96w__835
-2022-08-26 14:09:32,698 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,702 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36281', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:32,703 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36281
-2022-08-26 14:09:32,703 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,703 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46471', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:32,703 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46471
-2022-08-26 14:09:32,703 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,704 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36885', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:32,704 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36885
-2022-08-26 14:09:32,704 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,704 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46765
-2022-08-26 14:09:32,704 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,705 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46765
-2022-08-26 14:09:32,705 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,705 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46765
-2022-08-26 14:09:32,705 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:32,705 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,705 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,705 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,719 - distributed.scheduler - INFO - Receive client connection: Client-619ff046-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:32,719 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:32,833 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36281
-2022-08-26 14:09:32,835 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36281', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:09:32,835 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36281
-2022-08-26 14:09:32,835 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6cdbbeb7-77b6-446a-b157-a5bfa53dc553 Address tcp://127.0.0.1:36281 Status: Status.closing
-2022-08-26 14:09:32,838 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:36281
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 235, in read
-    n = await stream.read_into(chunk)
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1992, in gather_dep
-    response = await get_data_from_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2731, in get_data_from_worker
-    return await retry_operation(_get_data, operation="get_data_from_worker")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 383, in retry_operation
-    return await retry(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 368, in retry
-    return await coro()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2711, in _get_data
-    response = await send_recv(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 919, in send_recv
-    response = await comm.read(deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 241, in read
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://127.0.0.1:42818 remote=tcp://127.0.0.1:36281>: Stream is closed
-2022-08-26 14:09:32,877 - distributed.scheduler - INFO - Remove client Client-619ff046-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:32,878 - distributed.scheduler - INFO - Remove client Client-619ff046-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:32,878 - distributed.scheduler - INFO - Close client connection: Client-619ff046-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:32,878 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46471
-2022-08-26 14:09:32,879 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36885
-2022-08-26 14:09:32,880 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46471', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:32,880 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46471
-2022-08-26 14:09:32,880 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36885', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:32,880 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36885
-2022-08-26 14:09:32,880 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:32,880 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a542b3f7-bcb9-4ca4-a309-b795c683bf9f Address tcp://127.0.0.1:46471 Status: Status.closing
-2022-08-26 14:09:32,880 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7ddaaf90-e645-414b-9772-bfba21c7e46d Address tcp://127.0.0.1:36885 Status: Status.closing
-2022-08-26 14:09:32,882 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:32,882 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:33,090 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_dont_recompute_if_persisted 2022-08-26 14:09:33,096 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:33,098 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:33,098 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35355
-2022-08-26 14:09:33,098 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34095
-2022-08-26 14:09:33,103 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46409
-2022-08-26 14:09:33,103 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46409
-2022-08-26 14:09:33,103 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:33,103 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39239
-2022-08-26 14:09:33,103 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35355
-2022-08-26 14:09:33,103 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,103 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:33,103 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:33,103 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bpa7u570
-2022-08-26 14:09:33,103 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,104 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45487
-2022-08-26 14:09:33,104 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45487
-2022-08-26 14:09:33,104 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:33,104 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41687
-2022-08-26 14:09:33,104 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35355
-2022-08-26 14:09:33,104 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,104 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:33,104 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:33,104 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-b9ch4sx9
-2022-08-26 14:09:33,104 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,107 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46409', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:33,107 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46409
-2022-08-26 14:09:33,107 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,108 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45487', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:33,108 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45487
-2022-08-26 14:09:33,108 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,108 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35355
-2022-08-26 14:09:33,108 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,108 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35355
-2022-08-26 14:09:33,108 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,109 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,109 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,122 - distributed.scheduler - INFO - Receive client connection: Client-61dd75f3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:33,123 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,253 - distributed.scheduler - INFO - Remove client Client-61dd75f3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:33,253 - distributed.scheduler - INFO - Remove client Client-61dd75f3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:33,253 - distributed.scheduler - INFO - Close client connection: Client-61dd75f3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:33,254 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46409
-2022-08-26 14:09:33,254 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45487
-2022-08-26 14:09:33,255 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46409', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:33,255 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46409
-2022-08-26 14:09:33,255 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45487', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:33,255 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45487
-2022-08-26 14:09:33,255 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:33,256 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e34537af-3475-4bf9-af7e-b70611d96dad Address tcp://127.0.0.1:46409 Status: Status.closing
-2022-08-26 14:09:33,256 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cedc0fe9-7e6a-4eed-b70c-8d8af7498729 Address tcp://127.0.0.1:45487 Status: Status.closing
-2022-08-26 14:09:33,257 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:33,257 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:33,466 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_dont_recompute_if_persisted_2 2022-08-26 14:09:33,472 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:33,474 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:33,474 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44511
-2022-08-26 14:09:33,474 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44901
-2022-08-26 14:09:33,478 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45261
-2022-08-26 14:09:33,478 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45261
-2022-08-26 14:09:33,478 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:33,478 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45563
-2022-08-26 14:09:33,478 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44511
-2022-08-26 14:09:33,479 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,479 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:33,479 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:33,479 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xlji2kb9
-2022-08-26 14:09:33,479 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,479 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34779
-2022-08-26 14:09:33,479 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34779
-2022-08-26 14:09:33,479 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:33,479 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36931
-2022-08-26 14:09:33,479 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44511
-2022-08-26 14:09:33,479 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,480 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:33,480 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:33,480 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cpi6d7u5
-2022-08-26 14:09:33,480 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,483 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45261', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:33,483 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45261
-2022-08-26 14:09:33,483 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,483 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34779', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:33,484 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34779
-2022-08-26 14:09:33,484 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,484 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44511
-2022-08-26 14:09:33,484 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,484 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44511
-2022-08-26 14:09:33,484 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,484 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,485 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,498 - distributed.scheduler - INFO - Receive client connection: Client-6216ce92-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:33,498 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,637 - distributed.scheduler - INFO - Remove client Client-6216ce92-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:33,637 - distributed.scheduler - INFO - Remove client Client-6216ce92-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:33,637 - distributed.scheduler - INFO - Close client connection: Client-6216ce92-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:33,637 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45261
-2022-08-26 14:09:33,638 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34779
-2022-08-26 14:09:33,639 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45261', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:33,639 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45261
-2022-08-26 14:09:33,639 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34779', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:33,639 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34779
-2022-08-26 14:09:33,639 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:33,639 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bd52a0c2-64d8-40c8-9c9a-2e15944fff14 Address tcp://127.0.0.1:45261 Status: Status.closing
-2022-08-26 14:09:33,639 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-df62bf15-25e0-42a4-9784-a8d056475493 Address tcp://127.0.0.1:34779 Status: Status.closing
-2022-08-26 14:09:33,640 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:33,641 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:33,849 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_dont_recompute_if_persisted_3 2022-08-26 14:09:33,855 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:33,857 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:33,857 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34971
-2022-08-26 14:09:33,857 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37293
-2022-08-26 14:09:33,861 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39045
-2022-08-26 14:09:33,861 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39045
-2022-08-26 14:09:33,862 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:33,862 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38025
-2022-08-26 14:09:33,862 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34971
-2022-08-26 14:09:33,862 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,862 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:33,862 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:33,862 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-67r20bfq
-2022-08-26 14:09:33,862 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,862 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37641
-2022-08-26 14:09:33,862 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37641
-2022-08-26 14:09:33,862 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:33,863 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43889
-2022-08-26 14:09:33,863 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34971
-2022-08-26 14:09:33,863 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,863 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:33,863 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:33,863 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_geqz31z
-2022-08-26 14:09:33,863 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,866 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39045', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:33,866 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39045
-2022-08-26 14:09:33,866 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,866 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37641', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:33,867 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37641
-2022-08-26 14:09:33,867 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,867 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34971
-2022-08-26 14:09:33,867 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,867 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34971
-2022-08-26 14:09:33,867 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:33,867 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,868 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:33,881 - distributed.scheduler - INFO - Receive client connection: Client-62513e35-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:33,881 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,025 - distributed.scheduler - INFO - Remove client Client-62513e35-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,025 - distributed.scheduler - INFO - Remove client Client-62513e35-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,025 - distributed.scheduler - INFO - Close client connection: Client-62513e35-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,025 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39045
-2022-08-26 14:09:34,026 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37641
-2022-08-26 14:09:34,027 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39045', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:34,027 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39045
-2022-08-26 14:09:34,027 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37641', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:34,027 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37641
-2022-08-26 14:09:34,027 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:34,027 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5efea056-5016-49dd-b049-fd04c88a1538 Address tcp://127.0.0.1:39045 Status: Status.closing
-2022-08-26 14:09:34,028 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c270401a-7fc6-4c1d-9571-295dd02f7d9b Address tcp://127.0.0.1:37641 Status: Status.closing
-2022-08-26 14:09:34,029 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:34,029 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:34,238 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_dont_recompute_if_persisted_4 2022-08-26 14:09:34,243 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:34,245 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:34,245 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40991
-2022-08-26 14:09:34,245 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42909
-2022-08-26 14:09:34,250 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45633
-2022-08-26 14:09:34,250 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45633
-2022-08-26 14:09:34,250 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:34,250 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35793
-2022-08-26 14:09:34,250 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40991
-2022-08-26 14:09:34,250 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,250 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:34,250 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:34,250 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ljvvz4ux
-2022-08-26 14:09:34,250 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,251 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42297
-2022-08-26 14:09:34,251 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42297
-2022-08-26 14:09:34,251 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:34,251 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40101
-2022-08-26 14:09:34,251 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40991
-2022-08-26 14:09:34,251 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,251 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:34,251 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:34,251 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gylkmm3v
-2022-08-26 14:09:34,251 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,254 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45633', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:34,254 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45633
-2022-08-26 14:09:34,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,254 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42297', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:34,255 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42297
-2022-08-26 14:09:34,255 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,255 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40991
-2022-08-26 14:09:34,255 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,255 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40991
-2022-08-26 14:09:34,255 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,256 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,256 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,269 - distributed.scheduler - INFO - Receive client connection: Client-628c7988-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,270 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,302 - distributed.scheduler - INFO - Remove client Client-628c7988-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,302 - distributed.scheduler - INFO - Remove client Client-628c7988-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,302 - distributed.scheduler - INFO - Close client connection: Client-628c7988-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,303 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45633
-2022-08-26 14:09:34,303 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42297
-2022-08-26 14:09:34,304 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45633', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:34,304 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45633
-2022-08-26 14:09:34,304 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42297', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:34,305 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42297
-2022-08-26 14:09:34,305 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:34,305 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-74c5cbe7-0c29-41ee-a615-5475faf8fc77 Address tcp://127.0.0.1:42297 Status: Status.closing
-2022-08-26 14:09:34,305 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3f7c1e56-2802-44ea-a51c-adca876598f6 Address tcp://127.0.0.1:45633 Status: Status.closing
-2022-08-26 14:09:34,306 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:34,306 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:34,515 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_dont_forget_released_keys 2022-08-26 14:09:34,521 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:34,522 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:34,522 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39653
-2022-08-26 14:09:34,523 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34991
-2022-08-26 14:09:34,527 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45903
-2022-08-26 14:09:34,527 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45903
-2022-08-26 14:09:34,527 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:34,527 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43167
-2022-08-26 14:09:34,527 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39653
-2022-08-26 14:09:34,527 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,527 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:34,527 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:34,527 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zw0xgdof
-2022-08-26 14:09:34,527 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,528 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35603
-2022-08-26 14:09:34,528 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35603
-2022-08-26 14:09:34,528 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:34,528 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34023
-2022-08-26 14:09:34,528 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39653
-2022-08-26 14:09:34,528 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,528 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:34,528 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:34,528 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-o8z3g1_o
-2022-08-26 14:09:34,528 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,531 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45903', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:34,531 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45903
-2022-08-26 14:09:34,531 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,532 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35603', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:34,532 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35603
-2022-08-26 14:09:34,532 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,532 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39653
-2022-08-26 14:09:34,532 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,533 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39653
-2022-08-26 14:09:34,533 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,533 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,533 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,547 - distributed.scheduler - INFO - Receive client connection: Client-62b6ca0f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,547 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,590 - distributed.scheduler - INFO - Remove client Client-62b6ca0f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,590 - distributed.scheduler - INFO - Remove client Client-62b6ca0f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,590 - distributed.scheduler - INFO - Close client connection: Client-62b6ca0f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,590 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45903
-2022-08-26 14:09:34,591 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35603
-2022-08-26 14:09:34,592 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45903', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:34,592 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45903
-2022-08-26 14:09:34,592 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35603', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:34,592 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35603
-2022-08-26 14:09:34,592 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:34,592 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5e52d124-dcd2-4e9b-9af7-415490dc4b7b Address tcp://127.0.0.1:45903 Status: Status.closing
-2022-08-26 14:09:34,592 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5f8267ea-2106-43c0-88ee-50fadc46d4c2 Address tcp://127.0.0.1:35603 Status: Status.closing
-2022-08-26 14:09:34,593 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:34,593 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:34,802 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_dont_recompute_if_erred 2022-08-26 14:09:34,808 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:34,810 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:34,810 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39903
-2022-08-26 14:09:34,810 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35193
-2022-08-26 14:09:34,814 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39265
-2022-08-26 14:09:34,814 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39265
-2022-08-26 14:09:34,814 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:34,814 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45313
-2022-08-26 14:09:34,814 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39903
-2022-08-26 14:09:34,814 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,814 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:34,815 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:34,815 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5y8ggy0k
-2022-08-26 14:09:34,815 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,815 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43555
-2022-08-26 14:09:34,815 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43555
-2022-08-26 14:09:34,815 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:34,815 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45701
-2022-08-26 14:09:34,815 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39903
-2022-08-26 14:09:34,815 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,815 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:34,815 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:34,816 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lf32uj7h
-2022-08-26 14:09:34,816 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,819 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39265', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:34,819 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39265
-2022-08-26 14:09:34,819 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,819 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43555', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:34,820 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43555
-2022-08-26 14:09:34,820 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,820 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39903
-2022-08-26 14:09:34,820 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,820 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39903
-2022-08-26 14:09:34,820 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:34,820 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,820 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,835 - distributed.scheduler - INFO - Receive client connection: Client-62e2a7ee-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,835 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:34,853 - distributed.worker - WARNING - Compute Failed
-Key:       y
-Function:  div
-args:      (2, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:09:34,968 - distributed.scheduler - INFO - Remove client Client-62e2a7ee-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,968 - distributed.scheduler - INFO - Remove client Client-62e2a7ee-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,968 - distributed.scheduler - INFO - Close client connection: Client-62e2a7ee-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:34,969 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39265
-2022-08-26 14:09:34,969 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43555
-2022-08-26 14:09:34,970 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39265', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:34,970 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39265
-2022-08-26 14:09:34,970 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43555', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:34,970 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43555
-2022-08-26 14:09:34,970 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:34,970 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e7fa505b-de95-463c-a9f3-81d71c57773f Address tcp://127.0.0.1:39265 Status: Status.closing
-2022-08-26 14:09:34,971 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-22c336f7-118e-4f61-be3c-ec8be3112b5e Address tcp://127.0.0.1:43555 Status: Status.closing
-2022-08-26 14:09:34,971 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:34,972 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:35,180 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_closing_scheduler_closes_workers 2022-08-26 14:09:35,186 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:35,188 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:35,188 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40683
-2022-08-26 14:09:35,188 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35741
-2022-08-26 14:09:35,192 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34581
-2022-08-26 14:09:35,192 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34581
-2022-08-26 14:09:35,192 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:35,192 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43481
-2022-08-26 14:09:35,192 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40683
-2022-08-26 14:09:35,192 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,192 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:35,192 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:35,193 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hjeih3wi
-2022-08-26 14:09:35,193 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,193 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37765
-2022-08-26 14:09:35,193 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37765
-2022-08-26 14:09:35,193 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:35,193 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39571
-2022-08-26 14:09:35,193 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40683
-2022-08-26 14:09:35,193 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,193 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:35,193 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:35,193 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0jg2h4w3
-2022-08-26 14:09:35,194 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,196 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34581', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:35,197 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34581
-2022-08-26 14:09:35,197 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,197 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37765', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:35,197 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37765
-2022-08-26 14:09:35,197 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,198 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40683
-2022-08-26 14:09:35,198 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,198 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40683
-2022-08-26 14:09:35,198 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,198 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,198 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,209 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:35,209 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:35,210 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34581', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:09:35,210 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34581
-2022-08-26 14:09:35,210 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37765', name: 1, status: running, memory: 0, processing: 0>
-2022-08-26 14:09:35,210 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37765
-2022-08-26 14:09:35,210 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:35,210 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34581
-2022-08-26 14:09:35,211 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37765
-2022-08-26 14:09:35,211 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-02a3e5ca-668e-404d-b6e9-da3f97123a57 Address tcp://127.0.0.1:34581 Status: Status.closing
-2022-08-26 14:09:35,211 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9c808998-536f-40ca-9749-9fd53a2dfff0 Address tcp://127.0.0.1:37765 Status: Status.closing
-2022-08-26 14:09:35,211 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:60356 remote=tcp://127.0.0.1:40683>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:09:35,212 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:60360 remote=tcp://127.0.0.1:40683>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:09:35,431 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_resources_reset_after_cancelled_task 2022-08-26 14:09:35,437 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:35,438 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:35,438 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35249
-2022-08-26 14:09:35,439 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38935
-2022-08-26 14:09:35,441 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35257
-2022-08-26 14:09:35,441 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35257
-2022-08-26 14:09:35,441 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:35,441 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36157
-2022-08-26 14:09:35,441 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35249
-2022-08-26 14:09:35,441 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,441 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:35,442 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:35,442 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-21_whogd
-2022-08-26 14:09:35,442 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,443 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35257', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:35,444 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35257
-2022-08-26 14:09:35,444 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,444 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35249
-2022-08-26 14:09:35,444 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,444 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,458 - distributed.scheduler - INFO - Receive client connection: Client-6341cb37-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:35,458 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,482 - distributed.scheduler - INFO - Client Client-6341cb37-2583-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:09:35,482 - distributed.scheduler - INFO - Scheduler cancels key block-adbe69ca42723535feca8c8fd2d9e0df.  Force=False
-2022-08-26 14:09:35,505 - distributed.scheduler - INFO - Remove client Client-6341cb37-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:35,506 - distributed.scheduler - INFO - Remove client Client-6341cb37-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:35,506 - distributed.scheduler - INFO - Close client connection: Client-6341cb37-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:35,507 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35257
-2022-08-26 14:09:35,508 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c62dbdb9-47f5-4570-aae9-ff18d5c63479 Address tcp://127.0.0.1:35257 Status: Status.closing
-2022-08-26 14:09:35,508 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35257', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:35,508 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35257
-2022-08-26 14:09:35,508 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:35,509 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:35,509 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:35,717 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gh2187 2022-08-26 14:09:35,723 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:35,725 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:35,725 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45331
-2022-08-26 14:09:35,725 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36853
-2022-08-26 14:09:35,729 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37637
-2022-08-26 14:09:35,729 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37637
-2022-08-26 14:09:35,729 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:35,729 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46207
-2022-08-26 14:09:35,729 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45331
-2022-08-26 14:09:35,729 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,729 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:35,729 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:35,730 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nl88187z
-2022-08-26 14:09:35,730 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,730 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38729
-2022-08-26 14:09:35,730 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38729
-2022-08-26 14:09:35,730 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:35,730 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39711
-2022-08-26 14:09:35,730 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45331
-2022-08-26 14:09:35,730 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,730 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:35,730 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:35,730 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-14qabfln
-2022-08-26 14:09:35,731 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,734 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37637', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:35,734 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37637
-2022-08-26 14:09:35,734 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,734 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38729', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:35,734 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38729
-2022-08-26 14:09:35,735 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,735 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45331
-2022-08-26 14:09:35,735 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,735 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45331
-2022-08-26 14:09:35,735 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:35,735 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,735 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,749 - distributed.scheduler - INFO - Receive client connection: Client-636e4288-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:35,749 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:35,889 - distributed.scheduler - INFO - Remove client Client-636e4288-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:35,889 - distributed.scheduler - INFO - Remove client Client-636e4288-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:35,889 - distributed.scheduler - INFO - Close client connection: Client-636e4288-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:35,890 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37637
-2022-08-26 14:09:35,891 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38729
-2022-08-26 14:09:35,892 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38729', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:35,892 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38729
-2022-08-26 14:09:35,892 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-435d4809-008c-4ec8-b631-986a69379564 Address tcp://127.0.0.1:38729 Status: Status.closing
-2022-08-26 14:09:35,892 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-da34ffec-ecbe-45cb-9912-9bebfe0303b1 Address tcp://127.0.0.1:37637 Status: Status.closing
-2022-08-26 14:09:35,893 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37637', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:35,893 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37637
-2022-08-26 14:09:35,893 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:35,893 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:35,894 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:36,102 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_collect_versions 2022-08-26 14:09:36,108 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:36,110 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:36,110 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37845
-2022-08-26 14:09:36,110 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40015
-2022-08-26 14:09:36,114 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44867
-2022-08-26 14:09:36,114 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44867
-2022-08-26 14:09:36,114 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:36,114 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36231
-2022-08-26 14:09:36,114 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37845
-2022-08-26 14:09:36,114 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,115 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:36,115 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:36,115 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-b_mffyo5
-2022-08-26 14:09:36,115 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,115 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32945
-2022-08-26 14:09:36,115 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32945
-2022-08-26 14:09:36,115 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:36,115 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39059
-2022-08-26 14:09:36,115 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37845
-2022-08-26 14:09:36,115 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,115 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:36,115 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:36,116 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-r959edc4
-2022-08-26 14:09:36,116 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,118 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44867', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:36,119 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44867
-2022-08-26 14:09:36,119 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:36,119 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32945', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:36,119 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32945
-2022-08-26 14:09:36,119 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:36,120 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37845
-2022-08-26 14:09:36,120 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,120 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37845
-2022-08-26 14:09:36,120 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,120 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:36,120 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:36,134 - distributed.scheduler - INFO - Receive client connection: Client-63a8fca6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:36,134 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:36,145 - distributed.scheduler - INFO - Remove client Client-63a8fca6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:36,146 - distributed.scheduler - INFO - Remove client Client-63a8fca6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:36,146 - distributed.scheduler - INFO - Close client connection: Client-63a8fca6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:36,146 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44867
-2022-08-26 14:09:36,147 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32945
-2022-08-26 14:09:36,147 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44867', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:36,147 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44867
-2022-08-26 14:09:36,148 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32945', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:36,148 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32945
-2022-08-26 14:09:36,148 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:36,148 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7d366e5b-2db3-4bb1-ac2a-cf3e5011dcd3 Address tcp://127.0.0.1:44867 Status: Status.closing
-2022-08-26 14:09:36,148 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-35af49dd-b5c5-4c3f-ae6a-e74386741f9c Address tcp://127.0.0.1:32945 Status: Status.closing
-2022-08-26 14:09:36,149 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:36,149 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:36,358 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_idle_timeout 2022-08-26 14:09:36,363 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:36,365 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:36,365 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34115
-2022-08-26 14:09:36,365 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44361
-2022-08-26 14:09:36,370 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44107
-2022-08-26 14:09:36,370 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44107
-2022-08-26 14:09:36,370 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:36,370 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40315
-2022-08-26 14:09:36,370 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34115
-2022-08-26 14:09:36,370 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,370 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:36,370 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:36,370 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nge52m4f
-2022-08-26 14:09:36,370 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,371 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33419
-2022-08-26 14:09:36,371 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33419
-2022-08-26 14:09:36,371 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:36,371 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35245
-2022-08-26 14:09:36,371 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34115
-2022-08-26 14:09:36,371 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,371 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:36,371 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:36,371 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f3db0_3_
-2022-08-26 14:09:36,371 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,374 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44107', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:36,374 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44107
-2022-08-26 14:09:36,374 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:36,375 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33419', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:36,375 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33419
-2022-08-26 14:09:36,375 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:36,375 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34115
-2022-08-26 14:09:36,375 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,375 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34115
-2022-08-26 14:09:36,376 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:36,376 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:36,376 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:36,389 - distributed.scheduler - INFO - Receive client connection: Client-63cffaa9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:36,390 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:36,952 - distributed.scheduler - INFO - Scheduler closing after being idle for 500.00 ms
-2022-08-26 14:09:36,952 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:36,953 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:36,953 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44107', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:09:36,954 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44107
-2022-08-26 14:09:36,954 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33419', name: 1, status: running, memory: 1, processing: 0>
-2022-08-26 14:09:36,954 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33419
-2022-08-26 14:09:36,954 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:36,954 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44107
-2022-08-26 14:09:36,954 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33419
-2022-08-26 14:09:36,955 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-192a808e-4d8d-4d01-9050-8d2b49ad2bfe Address tcp://127.0.0.1:44107 Status: Status.closing
-2022-08-26 14:09:36,955 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2784e760-ab42-4d11-8bac-32c4078317ee Address tcp://127.0.0.1:33419 Status: Status.closing
-2022-08-26 14:09:36,955 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:34115 remote=tcp://127.0.0.1:50502>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:09:36,955 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:50484 remote=tcp://127.0.0.1:34115>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:09:36,956 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:50498 remote=tcp://127.0.0.1:34115>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:09:37,060 - distributed.client - ERROR - 
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x56403e7285d0>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-2022-08-26 14:09:37,270 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_idle_timeout_no_workers 2022-08-26 14:09:37,276 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:37,278 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:37,278 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36045
-2022-08-26 14:09:37,278 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37239
-2022-08-26 14:09:37,281 - distributed.scheduler - INFO - Receive client connection: Client-64580899-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:37,281 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:37,487 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35997
-2022-08-26 14:09:37,487 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35997
-2022-08-26 14:09:37,487 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37099
-2022-08-26 14:09:37,487 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36045
-2022-08-26 14:09:37,487 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:37,487 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:37,487 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:37,487 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8vgtrq7z
-2022-08-26 14:09:37,487 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:37,489 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35997', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:37,489 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35997
-2022-08-26 14:09:37,489 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:37,490 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36045
-2022-08-26 14:09:37,490 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:37,490 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:37,495 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35997
-2022-08-26 14:09:37,496 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35997', status: closing, memory: 1, processing: 0>
-2022-08-26 14:09:37,496 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35997
-2022-08-26 14:09:37,496 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:37,496 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-470bd5b5-7c30-4274-89ea-c41765cb5a0d Address tcp://127.0.0.1:35997 Status: Status.closing
-2022-08-26 14:09:37,600 - distributed.scheduler - INFO - Remove client Client-64580899-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:37,601 - distributed.scheduler - INFO - Remove client Client-64580899-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:37,601 - distributed.scheduler - INFO - Close client connection: Client-64580899-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:37,601 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:37,602 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:37,810 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_bandwidth 2022-08-26 14:09:37,816 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:37,817 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:37,818 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46135
-2022-08-26 14:09:37,818 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39337
-2022-08-26 14:09:37,822 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43771
-2022-08-26 14:09:37,822 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43771
-2022-08-26 14:09:37,822 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:37,822 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45643
-2022-08-26 14:09:37,822 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46135
-2022-08-26 14:09:37,822 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:37,822 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:37,822 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:37,822 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zk736pve
-2022-08-26 14:09:37,822 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:37,823 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41837
-2022-08-26 14:09:37,823 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41837
-2022-08-26 14:09:37,823 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:37,823 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40979
-2022-08-26 14:09:37,823 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46135
-2022-08-26 14:09:37,823 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:37,823 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:37,823 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:37,823 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5jz_7bfm
-2022-08-26 14:09:37,823 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:37,826 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43771', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:37,827 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43771
-2022-08-26 14:09:37,827 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:37,827 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41837', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:37,827 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41837
-2022-08-26 14:09:37,827 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:37,828 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46135
-2022-08-26 14:09:37,828 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:37,828 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46135
-2022-08-26 14:09:37,828 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:37,828 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:37,828 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:37,842 - distributed.scheduler - INFO - Receive client connection: Client-64ad9653-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:37,842 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:37,865 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43771
-2022-08-26 14:09:37,866 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43771', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:09:37,866 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43771
-2022-08-26 14:09:37,866 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-55ffbdfd-c69e-41f4-b66b-4f73022efdd1 Address tcp://127.0.0.1:43771 Status: Status.closing
-2022-08-26 14:09:37,878 - distributed.scheduler - INFO - Remove client Client-64ad9653-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:37,878 - distributed.scheduler - INFO - Remove client Client-64ad9653-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:37,878 - distributed.scheduler - INFO - Close client connection: Client-64ad9653-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:37,878 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41837
-2022-08-26 14:09:37,879 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41837', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:37,879 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41837
-2022-08-26 14:09:37,879 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:37,879 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6ffe21f4-de57-48dd-a543-65207cccef91 Address tcp://127.0.0.1:41837 Status: Status.closing
-2022-08-26 14:09:37,880 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:37,880 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:38,089 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_bandwidth_clear 2022-08-26 14:09:38,095 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:38,096 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:38,096 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35963
-2022-08-26 14:09:38,096 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36825
-2022-08-26 14:09:38,102 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:41795'
-2022-08-26 14:09:38,102 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33045'
-2022-08-26 14:09:38,788 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39551
-2022-08-26 14:09:38,788 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39551
-2022-08-26 14:09:38,788 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:38,788 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34189
-2022-08-26 14:09:38,788 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35963
-2022-08-26 14:09:38,788 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:38,788 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:38,788 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:38,789 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ulr4gtom
-2022-08-26 14:09:38,789 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:38,791 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39491
-2022-08-26 14:09:38,792 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39491
-2022-08-26 14:09:38,792 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:38,792 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43201
-2022-08-26 14:09:38,792 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35963
-2022-08-26 14:09:38,792 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:38,792 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:38,792 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:38,792 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zvhej0og
-2022-08-26 14:09:38,792 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:39,064 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39491', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:39,064 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39491
-2022-08-26 14:09:39,065 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:39,065 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35963
-2022-08-26 14:09:39,065 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:39,065 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:39,078 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39551', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:39,078 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39551
-2022-08-26 14:09:39,078 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:39,078 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35963
-2022-08-26 14:09:39,078 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:39,079 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:39,094 - distributed.scheduler - INFO - Receive client connection: Client-656c9d15-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:39,094 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:39,163 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:09:39,164 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:09:39,166 - distributed.scheduler - INFO - Releasing all requested keys
-2022-08-26 14:09:39,166 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:39,169 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:39,170 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:39,170 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39491
-2022-08-26 14:09:39,170 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39551
-2022-08-26 14:09:39,171 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3c5e3f1b-faae-478a-b3fe-48f72c6a8f2a Address tcp://127.0.0.1:39491 Status: Status.closing
-2022-08-26 14:09:39,171 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39491', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:39,171 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39491
-2022-08-26 14:09:39,171 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-22121ae4-8fa1-4a47-89a8-1d63187927b8 Address tcp://127.0.0.1:39551 Status: Status.closing
-2022-08-26 14:09:39,171 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39551', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:39,171 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39551
-2022-08-26 14:09:39,171 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:39,301 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:09:39,302 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:09:39,994 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34001
-2022-08-26 14:09:39,995 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34001
-2022-08-26 14:09:39,995 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:39,995 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34817
-2022-08-26 14:09:39,995 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35963
-2022-08-26 14:09:39,995 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:39,995 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:39,995 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:39,995 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6o2ndhm3
-2022-08-26 14:09:39,995 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:39,996 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39113
-2022-08-26 14:09:39,996 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39113
-2022-08-26 14:09:39,996 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:39,996 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43707
-2022-08-26 14:09:39,996 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35963
-2022-08-26 14:09:39,996 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:39,996 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:39,996 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:39,996 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-39cdhp1b
-2022-08-26 14:09:39,996 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,270 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39113', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:40,271 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39113
-2022-08-26 14:09:40,271 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,271 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35963
-2022-08-26 14:09:40,271 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,272 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,287 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34001', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:40,287 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34001
-2022-08-26 14:09:40,287 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,287 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35963
-2022-08-26 14:09:40,287 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,288 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,309 - distributed.scheduler - INFO - Remove client Client-656c9d15-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:40,310 - distributed.scheduler - INFO - Remove client Client-656c9d15-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:40,310 - distributed.scheduler - INFO - Close client connection: Client-656c9d15-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:40,310 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41795'.
-2022-08-26 14:09:40,310 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:40,311 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:33045'.
-2022-08-26 14:09:40,311 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:40,311 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34001
-2022-08-26 14:09:40,311 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39113
-2022-08-26 14:09:40,312 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-256465ec-4606-467d-94ed-3521325b5bb8 Address tcp://127.0.0.1:34001 Status: Status.closing
-2022-08-26 14:09:40,312 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34001', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:40,312 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34001
-2022-08-26 14:09:40,312 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-40ff7d55-5911-419c-a1e0-ff715b44cbef Address tcp://127.0.0.1:39113 Status: Status.closing
-2022-08-26 14:09:40,312 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39113', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:40,312 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39113
-2022-08-26 14:09:40,312 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:40,441 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:40,441 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:40,650 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
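For context, the "Run out-of-band function 'f'" lines above come from Client.run(), which executes a function directly on every connected worker instead of going through the task graph. A minimal sketch of that pattern, assuming only the public dask.distributed API (not taken from the test itself):

    import asyncio
    from distributed import Scheduler, Worker, Client

    async def main():
        async with Scheduler(dashboard_address=":0") as s:
            async with Worker(s.address) as a, Worker(s.address) as b:
                async with Client(s.address, asynchronous=True) as c:
                    # Client.run() executes the function on every worker and
                    # returns a dict keyed by worker address.
                    print(await c.run(lambda: "ok"))

    asyncio.run(main())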
-distributed/tests/test_scheduler.py::test_workerstate_clean 2022-08-26 14:09:40,656 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:40,658 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:40,658 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39039
-2022-08-26 14:09:40,658 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34967
-2022-08-26 14:09:40,662 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45757
-2022-08-26 14:09:40,662 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45757
-2022-08-26 14:09:40,662 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:40,662 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42775
-2022-08-26 14:09:40,662 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39039
-2022-08-26 14:09:40,663 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,663 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:40,663 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:40,663 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nzojiov4
-2022-08-26 14:09:40,663 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,663 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32871
-2022-08-26 14:09:40,663 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32871
-2022-08-26 14:09:40,663 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:40,663 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38777
-2022-08-26 14:09:40,663 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39039
-2022-08-26 14:09:40,663 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,663 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:40,664 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:40,664 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yrao7rjh
-2022-08-26 14:09:40,664 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,666 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45757', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:40,667 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45757
-2022-08-26 14:09:40,667 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,667 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32871', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:40,667 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32871
-2022-08-26 14:09:40,667 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,668 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39039
-2022-08-26 14:09:40,668 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,668 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39039
-2022-08-26 14:09:40,668 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,668 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,668 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,679 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45757
-2022-08-26 14:09:40,680 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32871
-2022-08-26 14:09:40,681 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45757', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:40,681 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45757
-2022-08-26 14:09:40,681 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32871', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:40,681 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32871
-2022-08-26 14:09:40,681 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:40,681 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-535c6fed-5aea-4808-9d09-e84c0b2f1def Address tcp://127.0.0.1:45757 Status: Status.closing
-2022-08-26 14:09:40,681 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-978d9ac6-0ddc-41bb-9c66-00e58d8b63b9 Address tcp://127.0.0.1:32871 Status: Status.closing
-2022-08-26 14:09:40,682 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:40,682 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:40,891 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_result_type 2022-08-26 14:09:40,896 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:40,898 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:40,898 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38885
-2022-08-26 14:09:40,898 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44043
-2022-08-26 14:09:40,903 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39919
-2022-08-26 14:09:40,903 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39919
-2022-08-26 14:09:40,903 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:40,903 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46725
-2022-08-26 14:09:40,903 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38885
-2022-08-26 14:09:40,903 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,903 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:40,903 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:40,903 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-85bkrm5r
-2022-08-26 14:09:40,903 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,904 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40703
-2022-08-26 14:09:40,904 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40703
-2022-08-26 14:09:40,904 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:40,904 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36387
-2022-08-26 14:09:40,904 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38885
-2022-08-26 14:09:40,904 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,904 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:40,904 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:40,904 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-be1g2qdc
-2022-08-26 14:09:40,904 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,907 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39919', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:40,907 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39919
-2022-08-26 14:09:40,907 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,908 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40703', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:40,908 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40703
-2022-08-26 14:09:40,908 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,908 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38885
-2022-08-26 14:09:40,908 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,909 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38885
-2022-08-26 14:09:40,909 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:40,909 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,909 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,923 - distributed.scheduler - INFO - Receive client connection: Client-6683ae05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:40,923 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:40,944 - distributed.scheduler - INFO - Remove client Client-6683ae05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:40,944 - distributed.scheduler - INFO - Remove client Client-6683ae05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:40,945 - distributed.scheduler - INFO - Close client connection: Client-6683ae05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:40,946 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39919
-2022-08-26 14:09:40,946 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40703
-2022-08-26 14:09:40,947 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40703', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:40,947 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40703
-2022-08-26 14:09:40,947 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b04aff02-8ad4-4d64-ada1-7a61b251c653 Address tcp://127.0.0.1:40703 Status: Status.closing
-2022-08-26 14:09:40,947 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-97fa56b4-3421-4a1a-8a0c-7847469d74de Address tcp://127.0.0.1:39919 Status: Status.closing
-2022-08-26 14:09:40,948 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39919', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:40,948 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39919
-2022-08-26 14:09:40,948 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:40,949 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:40,949 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:41,157 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_close_workers 2022-08-26 14:09:41,163 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:41,164 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:41,165 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42021
-2022-08-26 14:09:41,165 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36999
-2022-08-26 14:09:41,169 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44181
-2022-08-26 14:09:41,169 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44181
-2022-08-26 14:09:41,169 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:41,169 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43991
-2022-08-26 14:09:41,169 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42021
-2022-08-26 14:09:41,169 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,169 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:41,169 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:41,170 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zvd19jqz
-2022-08-26 14:09:41,170 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,170 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40077
-2022-08-26 14:09:41,170 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40077
-2022-08-26 14:09:41,170 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:41,170 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46571
-2022-08-26 14:09:41,170 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42021
-2022-08-26 14:09:41,170 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,170 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:41,170 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:41,170 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i9s1tgow
-2022-08-26 14:09:41,171 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,173 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44181', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:41,174 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44181
-2022-08-26 14:09:41,174 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:41,174 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40077', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:41,174 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40077
-2022-08-26 14:09:41,174 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:41,175 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42021
-2022-08-26 14:09:41,175 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,175 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42021
-2022-08-26 14:09:41,175 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,175 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:41,175 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:41,186 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:41,186 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:41,187 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44181', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:09:41,187 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44181
-2022-08-26 14:09:41,187 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40077', name: 1, status: running, memory: 0, processing: 0>
-2022-08-26 14:09:41,187 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40077
-2022-08-26 14:09:41,187 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:41,187 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44181
-2022-08-26 14:09:41,188 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40077
-2022-08-26 14:09:41,188 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3dce2eca-e9e1-488b-b282-e82a35e5058a Address tcp://127.0.0.1:44181 Status: Status.closing
-2022-08-26 14:09:41,188 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e717e20f-0977-46ae-bc99-25b480d44545 Address tcp://127.0.0.1:40077 Status: Status.closing
-2022-08-26 14:09:41,188 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:52552 remote=tcp://127.0.0.1:42021>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:09:41,189 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:52560 remote=tcp://127.0.0.1:42021>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:09:41,496 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
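The CommClosedError tracebacks above are expected noise for this test: the scheduler closes its comms first, so each worker's batched Worker->Scheduler stream fails on its next write and the worker then shuts down "without reporting". A rough sketch of that shutdown ordering, assuming the public Scheduler/Worker API:

    import asyncio
    from distributed import Scheduler, Worker

    async def main():
        s = await Scheduler(dashboard_address=":0")   # started outside `async with`
        w = await Worker(s.address)                   # so the close order is explicit
        await s.close()   # scheduler goes away first ...
        await w.close()   # ... worker notices the broken comm and closes

    asyncio.run(main())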
-distributed/tests/test_scheduler.py::test_host_address 2022-08-26 14:09:41,522 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:41,523 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:41,523 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.2:44125
-2022-08-26 14:09:41,523 - distributed.scheduler - INFO -   dashboard at:           127.0.0.2:39081
-2022-08-26 14:09:41,524 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:41,524 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
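A short sketch of what this test pins down: the scheduler can be bound to an explicit host (127.0.0.2 in the log above) via the host= keyword. Illustrative only:

    import asyncio
    from distributed import Scheduler

    async def main():
        async with Scheduler(host="127.0.0.2", dashboard_address=":0") as s:
            # the listen address reflects the requested host
            assert s.address.startswith("tcp://127.0.0.2:")

    asyncio.run(main())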
-distributed/tests/test_scheduler.py::test_dashboard_address 2022-08-26 14:09:41,548 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:41,550 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:41,550 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:33811
-2022-08-26 14:09:41,550 - distributed.scheduler - INFO -   dashboard at:                     :8901
-2022-08-26 14:09:41,550 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:41,551 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:41,571 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:41,573 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:41,573 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:39341
-2022-08-26 14:09:41,573 - distributed.scheduler - INFO -   dashboard at:                    :32991
-2022-08-26 14:09:41,573 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:41,573 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:41,593 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:41,595 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:41,595 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:42479
-2022-08-26 14:09:41,595 - distributed.scheduler - INFO -   dashboard at:                     :8901
-2022-08-26 14:09:41,595 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:41,596 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:41,616 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:41,617 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:41,618 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:40889
-2022-08-26 14:09:41,618 - distributed.scheduler - INFO -   dashboard at:                     :8901
-2022-08-26 14:09:41,618 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:41,618 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:41,638 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:41,640 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:41,640 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:39735
-2022-08-26 14:09:41,640 - distributed.scheduler - INFO -   dashboard at:                     :8901
-2022-08-26 14:09:41,640 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:41,641 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
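The repeated scheduler starts above alternate between a fixed dashboard port (:8901) and a randomly chosen one (:32991). A sketch of the corresponding dashboard_address settings, illustrative only:

    import asyncio
    from distributed import Scheduler

    async def main():
        async with Scheduler(dashboard_address=":8901") as s:
            pass   # log reports "dashboard at: :8901"
        async with Scheduler(dashboard_address=":0") as s:
            pass   # a free port is picked automatically

    asyncio.run(main())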
-distributed/tests/test_scheduler.py::test_adaptive_target 2022-08-26 14:09:41,646 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:41,647 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:41,647 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35591
-2022-08-26 14:09:41,647 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37133
-2022-08-26 14:09:41,652 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34731
-2022-08-26 14:09:41,652 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34731
-2022-08-26 14:09:41,652 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:41,652 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32865
-2022-08-26 14:09:41,652 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35591
-2022-08-26 14:09:41,652 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,652 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:41,652 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:41,652 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k1zaae2q
-2022-08-26 14:09:41,652 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,653 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35521
-2022-08-26 14:09:41,653 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35521
-2022-08-26 14:09:41,653 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:41,653 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41683
-2022-08-26 14:09:41,653 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35591
-2022-08-26 14:09:41,653 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,653 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:41,653 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:41,653 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-glkkf1al
-2022-08-26 14:09:41,653 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,656 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34731', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:41,656 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34731
-2022-08-26 14:09:41,657 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:41,657 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35521', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:41,657 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35521
-2022-08-26 14:09:41,657 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:41,657 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35591
-2022-08-26 14:09:41,658 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,658 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35591
-2022-08-26 14:09:41,658 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:41,658 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:41,658 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:41,672 - distributed.scheduler - INFO - Receive client connection: Client-66f5ff68-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:41,672 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:41,743 - distributed.scheduler - INFO - Remove client Client-66f5ff68-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:41,743 - distributed.scheduler - INFO - Remove client Client-66f5ff68-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:41,744 - distributed.scheduler - INFO - Close client connection: Client-66f5ff68-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:41,744 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34731
-2022-08-26 14:09:41,744 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35521
-2022-08-26 14:09:41,746 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c26d6b5b-d714-415e-a114-7d8096f5bcc7 Address tcp://127.0.0.1:34731 Status: Status.closing
-2022-08-26 14:09:41,746 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-75a66559-07f7-42d0-96d2-9d963966d33c Address tcp://127.0.0.1:35521 Status: Status.closing
-2022-08-26 14:09:41,746 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34731', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:41,747 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34731
-2022-08-26 14:09:41,747 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35521', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:41,747 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35521
-2022-08-26 14:09:41,747 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:42,235 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:42,236 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:42,448 - distributed.utils_perf - WARNING - full garbage collections took 77% CPU time recently (threshold: 10%)
-PASSED
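What the adaptive-target test exercises is the scheduler's estimate of how many workers it would like for the current workload; Adaptive deployments poll this number. A hedged sketch, assuming Scheduler.adaptive_target() (the internal hook used by Adaptive):

    import asyncio
    from distributed import Scheduler, Worker, Client

    async def main():
        async with Scheduler(dashboard_address=":0") as s:
            async with Worker(s.address) as w:
                async with Client(s.address, asynchronous=True) as c:
                    await c.gather(c.map(lambda x: x + 1, range(100)))
                    # desired worker count for the work seen so far (assumed method)
                    print(s.adaptive_target())

    asyncio.run(main())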
-distributed/tests/test_scheduler.py::test_async_context_manager 2022-08-26 14:09:42,473 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:42,475 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:42,475 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:45255
-2022-08-26 14:09:42,475 - distributed.scheduler - INFO -   dashboard at:                    :41603
-2022-08-26 14:09:42,478 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:40431
-2022-08-26 14:09:42,478 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:40431
-2022-08-26 14:09:42,478 - distributed.worker - INFO -          dashboard at:        192.168.1.159:39185
-2022-08-26 14:09:42,478 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:45255
-2022-08-26 14:09:42,478 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,478 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:42,478 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:42,478 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a85yrg9k
-2022-08-26 14:09:42,478 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,480 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://192.168.1.159:40431', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:42,480 - distributed.scheduler - INFO - Starting worker compute stream, tcp://192.168.1.159:40431
-2022-08-26 14:09:42,480 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,481 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:45255
-2022-08-26 14:09:42,481 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,481 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:40431
-2022-08-26 14:09:42,481 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,481 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5fbb02be-de1d-49a6-a07b-f3f497318bfe Address tcp://192.168.1.159:40431 Status: Status.closing
-2022-08-26 14:09:42,482 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://192.168.1.159:40431', status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:42,482 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:40431
-2022-08-26 14:09:42,482 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:42,483 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:42,483 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
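The start/stop sequence above is exactly what using Scheduler and Worker as async context managers produces; both objects clean up when their blocks exit. A minimal sketch:

    import asyncio
    from distributed import Scheduler, Worker

    async def main():
        async with Scheduler(dashboard_address=":0") as s:
            async with Worker(s.address) as w:
                assert w.status.name == "running"
        # both scheduler and worker are fully closed here

    asyncio.run(main())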
-distributed/tests/test_scheduler.py::test_allowed_failures_config 2022-08-26 14:09:42,508 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:42,509 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:42,511 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:39727
-2022-08-26 14:09:42,511 - distributed.scheduler - INFO -   dashboard at:                    :37463
-2022-08-26 14:09:42,511 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:42,511 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:42,531 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:42,533 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:42,533 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:45817
-2022-08-26 14:09:42,533 - distributed.scheduler - INFO -   dashboard at:                    :44195
-2022-08-26 14:09:42,534 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:42,534 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:42,554 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:42,555 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:42,555 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:37033
-2022-08-26 14:09:42,555 - distributed.scheduler - INFO -   dashboard at:                    :37385
-2022-08-26 14:09:42,556 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:42,556 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
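This test only flips a config knob: how many task failures the scheduler tolerates before marking a task as erred. A sketch using the documented distributed.scheduler.allowed-failures key; the allowed_failures attribute read back here is an assumption about the scheduler's internals:

    import asyncio
    import dask
    from distributed import Scheduler

    async def main():
        with dask.config.set({"distributed.scheduler.allowed-failures": 10}):
            async with Scheduler(dashboard_address=":0") as s:
                print(s.allowed_failures)   # assumed attribute; expected 10

    asyncio.run(main())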
-distributed/tests/test_scheduler.py::test_finished 2022-08-26 14:09:42,582 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:42,583 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:42,583 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:42197
-2022-08-26 14:09:42,584 - distributed.scheduler - INFO -   dashboard at:                    :43237
-2022-08-26 14:09:42,586 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:34777
-2022-08-26 14:09:42,586 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:34777
-2022-08-26 14:09:42,586 - distributed.worker - INFO -          dashboard at:        192.168.1.159:32897
-2022-08-26 14:09:42,586 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:42197
-2022-08-26 14:09:42,587 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,587 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:42,587 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:42,587 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f7jqtcpz
-2022-08-26 14:09:42,587 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,589 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://192.168.1.159:34777', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:42,589 - distributed.scheduler - INFO - Starting worker compute stream, tcp://192.168.1.159:34777
-2022-08-26 14:09:42,589 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,589 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:42197
-2022-08-26 14:09:42,589 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,589 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:34777
-2022-08-26 14:09:42,590 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,590 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bc03f685-20e7-4ed5-84be-48f93aeeb2d6 Address tcp://192.168.1.159:34777 Status: Status.closing
-2022-08-26 14:09:42,591 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://192.168.1.159:34777', status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:42,591 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:34777
-2022-08-26 14:09:42,591 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:42,591 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:42,591 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_scheduler.py::test_retire_names_str 2022-08-26 14:09:42,597 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:42,598 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:42,599 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36685
-2022-08-26 14:09:42,599 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44155
-2022-08-26 14:09:42,602 - distributed.scheduler - INFO - Receive client connection: Client-6783e318-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:42,602 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,605 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36379
-2022-08-26 14:09:42,605 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36379
-2022-08-26 14:09:42,605 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:42,605 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45009
-2022-08-26 14:09:42,605 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36685
-2022-08-26 14:09:42,605 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,605 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:42,605 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:42,605 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j4gue5bd
-2022-08-26 14:09:42,605 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,607 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36379', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:42,608 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36379
-2022-08-26 14:09:42,608 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,608 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36685
-2022-08-26 14:09:42,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,610 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42543
-2022-08-26 14:09:42,611 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42543
-2022-08-26 14:09:42,611 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:42,611 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43935
-2022-08-26 14:09:42,611 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36685
-2022-08-26 14:09:42,611 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,611 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:42,611 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:42,611 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oo0shefm
-2022-08-26 14:09:42,611 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,611 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,613 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42543', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:42,613 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42543
-2022-08-26 14:09:42,614 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,614 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36685
-2022-08-26 14:09:42,614 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,616 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,625 - distributed.scheduler - INFO - Retire worker names [0]
-2022-08-26 14:09:42,625 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:36379
-2022-08-26 14:09:42,625 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:36379; 5 keys are being moved away.
-2022-08-26 14:09:42,638 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36379', name: 0, status: closing_gracefully, memory: 5, processing: 0>
-2022-08-26 14:09:42,638 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36379
-2022-08-26 14:09:42,638 - distributed.scheduler - INFO - Retired worker tcp://127.0.0.1:36379
-2022-08-26 14:09:42,638 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42543
-2022-08-26 14:09:42,639 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42543', name: 1, status: closing, memory: 10, processing: 0>
-2022-08-26 14:09:42,639 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42543
-2022-08-26 14:09:42,640 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:42,640 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-79d5bab8-bdbb-4a16-9357-04142517f201 Address tcp://127.0.0.1:42543 Status: Status.closing
-2022-08-26 14:09:42,641 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36379
-2022-08-26 14:09:42,642 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c2d0bf85-2768-4841-9e55-a19e1eb7e99f Address tcp://127.0.0.1:36379 Status: Status.closing
-2022-08-26 14:09:42,654 - distributed.scheduler - INFO - Remove client Client-6783e318-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:42,654 - distributed.scheduler - INFO - Remove client Client-6783e318-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:42,654 - distributed.scheduler - INFO - Close client connection: Client-6783e318-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:42,655 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:42,655 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:42,866 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
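"Retire worker names [0]" above is retirement by worker name rather than by address; the active memory manager first copies the retiring worker's in-memory keys elsewhere ("5 keys are being moved away"). A sketch of that call, assuming Scheduler.retire_workers(names=...):

    import asyncio
    from distributed import Scheduler, Worker, Client

    async def main():
        async with Scheduler(dashboard_address=":0") as s:
            async with Worker(s.address, name=0) as a, Worker(s.address, name=1) as b:
                async with Client(s.address, asynchronous=True) as c:
                    await c.gather(c.map(lambda x: x + 1, range(10)))
                    await s.retire_workers(names=[0])   # keys migrate to worker 1
                    print(list(s.workers))              # only worker 1 remains

    asyncio.run(main())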
-distributed/tests/test_scheduler.py::test_get_task_duration 2022-08-26 14:09:42,872 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:42,873 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:42,874 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34515
-2022-08-26 14:09:42,874 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46679
-2022-08-26 14:09:42,878 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34153
-2022-08-26 14:09:42,878 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34153
-2022-08-26 14:09:42,878 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:42,878 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45019
-2022-08-26 14:09:42,878 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34515
-2022-08-26 14:09:42,878 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,878 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:42,878 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:42,879 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-odp4t67p
-2022-08-26 14:09:42,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,879 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44677
-2022-08-26 14:09:42,879 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44677
-2022-08-26 14:09:42,879 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:42,879 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45947
-2022-08-26 14:09:42,879 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34515
-2022-08-26 14:09:42,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,879 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:42,879 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:42,880 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4m07jjzc
-2022-08-26 14:09:42,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,882 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34153', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:42,883 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34153
-2022-08-26 14:09:42,883 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,883 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44677', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:42,883 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44677
-2022-08-26 14:09:42,883 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,884 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34515
-2022-08-26 14:09:42,884 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,884 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34515
-2022-08-26 14:09:42,884 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:42,884 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,884 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,898 - distributed.scheduler - INFO - Receive client connection: Client-67b1148b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:42,898 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:42,930 - distributed.scheduler - INFO - Remove client Client-67b1148b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:42,931 - distributed.scheduler - INFO - Remove client Client-67b1148b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:42,931 - distributed.scheduler - INFO - Close client connection: Client-67b1148b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:42,932 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34153
-2022-08-26 14:09:42,932 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44677
-2022-08-26 14:09:42,933 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-257c8d8d-256c-4940-b489-4381be3d7891 Address tcp://127.0.0.1:34153 Status: Status.closing
-2022-08-26 14:09:42,934 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34153', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:42,934 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34153
-2022-08-26 14:09:42,934 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-81087d45-aef5-4e57-a437-dd784585234f Address tcp://127.0.0.1:44677 Status: Status.closing
-2022-08-26 14:09:42,935 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44677', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:42,935 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44677
-2022-08-26 14:09:42,935 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:43,422 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:43,423 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:43,630 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_default_task_duration_splits 2022-08-26 14:09:43,636 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:43,637 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:43,638 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40737
-2022-08-26 14:09:43,638 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37673
-2022-08-26 14:09:43,642 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36001
-2022-08-26 14:09:43,642 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36001
-2022-08-26 14:09:43,642 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:43,642 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43545
-2022-08-26 14:09:43,642 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40737
-2022-08-26 14:09:43,642 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:43,642 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:43,643 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:43,643 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8etowkrg
-2022-08-26 14:09:43,643 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:43,643 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39259
-2022-08-26 14:09:43,643 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39259
-2022-08-26 14:09:43,643 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:43,643 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39683
-2022-08-26 14:09:43,643 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40737
-2022-08-26 14:09:43,643 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:43,643 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:43,644 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:43,644 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kpfw0i6v
-2022-08-26 14:09:43,644 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:43,646 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36001', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:43,647 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36001
-2022-08-26 14:09:43,647 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:43,647 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39259', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:43,647 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39259
-2022-08-26 14:09:43,648 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:43,648 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40737
-2022-08-26 14:09:43,648 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:43,648 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40737
-2022-08-26 14:09:43,648 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:43,648 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:43,648 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:43,662 - distributed.scheduler - INFO - Receive client connection: Client-6825b1ac-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:43,662 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,021 - distributed.scheduler - INFO - Remove client Client-6825b1ac-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,022 - distributed.scheduler - INFO - Remove client Client-6825b1ac-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,022 - distributed.scheduler - INFO - Close client connection: Client-6825b1ac-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,022 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36001
-2022-08-26 14:09:44,023 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39259
-2022-08-26 14:09:44,024 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36001', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:44,024 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36001
-2022-08-26 14:09:44,024 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39259', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:44,024 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39259
-2022-08-26 14:09:44,024 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:44,024 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-90de8ac6-fed3-49fb-bf2a-1f32bde8a307 Address tcp://127.0.0.1:36001 Status: Status.closing
-2022-08-26 14:09:44,024 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-04df05ef-6b52-4829-8090-4f24a949dd7f Address tcp://127.0.0.1:39259 Status: Status.closing
-2022-08-26 14:09:44,026 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:44,026 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:44,238 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_no_dangling_asyncio_tasks 2022-08-26 14:09:44,263 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:44,265 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:44,265 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:42463
-2022-08-26 14:09:44,265 - distributed.scheduler - INFO -   dashboard at:                    :36355
-2022-08-26 14:09:44,268 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:40623
-2022-08-26 14:09:44,268 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:40623
-2022-08-26 14:09:44,268 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:44,268 - distributed.worker - INFO -          dashboard at:        192.168.1.159:38689
-2022-08-26 14:09:44,268 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:42463
-2022-08-26 14:09:44,268 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,268 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:44,268 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:44,268 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xa2ymacu
-2022-08-26 14:09:44,268 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,270 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://192.168.1.159:40623', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:44,271 - distributed.scheduler - INFO - Starting worker compute stream, tcp://192.168.1.159:40623
-2022-08-26 14:09:44,271 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,271 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:42463
-2022-08-26 14:09:44,271 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,271 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,274 - distributed.scheduler - INFO - Receive client connection: Client-68830b35-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,274 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,298 - distributed.scheduler - INFO - Remove client Client-68830b35-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,298 - distributed.scheduler - INFO - Remove client Client-68830b35-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,298 - distributed.scheduler - INFO - Close client connection: Client-68830b35-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,299 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:40623
-2022-08-26 14:09:44,300 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0eca106a-757d-4e4c-848c-79f297683fe7 Address tcp://192.168.1.159:40623 Status: Status.closing
-2022-08-26 14:09:44,300 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://192.168.1.159:40623', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:44,300 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:40623
-2022-08-26 14:09:44,300 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:44,301 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:44,301 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_scheduler.py::test_task_groups 2022-08-26 14:09:44,307 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:44,308 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:44,309 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42929
-2022-08-26 14:09:44,309 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43019
-2022-08-26 14:09:44,313 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38441
-2022-08-26 14:09:44,313 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38441
-2022-08-26 14:09:44,313 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:44,313 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45959
-2022-08-26 14:09:44,313 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42929
-2022-08-26 14:09:44,313 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,313 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:44,313 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:44,314 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6gd9bubv
-2022-08-26 14:09:44,314 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,314 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38905
-2022-08-26 14:09:44,314 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38905
-2022-08-26 14:09:44,314 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:44,314 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40077
-2022-08-26 14:09:44,314 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42929
-2022-08-26 14:09:44,314 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,314 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:44,315 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:44,315 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lm8rtenp
-2022-08-26 14:09:44,315 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,317 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38441', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:44,318 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38441
-2022-08-26 14:09:44,318 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,318 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38905', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:44,320 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38905
-2022-08-26 14:09:44,320 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,320 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42929
-2022-08-26 14:09:44,320 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,321 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42929
-2022-08-26 14:09:44,321 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,321 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,321 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,335 - distributed.scheduler - INFO - Receive client connection: Client-688c5017-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,335 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,378 - distributed.scheduler - INFO - Remove client Client-688c5017-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,379 - distributed.scheduler - INFO - Remove client Client-688c5017-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,379 - distributed.scheduler - INFO - Close client connection: Client-688c5017-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,379 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38441
-2022-08-26 14:09:44,379 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38905
-2022-08-26 14:09:44,380 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38441', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:44,380 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38441
-2022-08-26 14:09:44,381 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38905', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:44,381 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38905
-2022-08-26 14:09:44,381 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:44,381 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: NoSchedulerDelayWorker-6fbe4566-9fc2-458a-8998-96b413950a81 Address tcp://127.0.0.1:38441 Status: Status.closing
-2022-08-26 14:09:44,381 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: NoSchedulerDelayWorker-ac57c641-b75e-49dc-88ac-bc21f4d4ac82 Address tcp://127.0.0.1:38905 Status: Status.closing
-2022-08-26 14:09:44,383 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:44,384 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:44,596 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_task_prefix 2022-08-26 14:09:44,602 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:44,603 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:44,603 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42425
-2022-08-26 14:09:44,603 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38795
-2022-08-26 14:09:44,608 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32909
-2022-08-26 14:09:44,608 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32909
-2022-08-26 14:09:44,608 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:44,608 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38867
-2022-08-26 14:09:44,608 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42425
-2022-08-26 14:09:44,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,608 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:44,608 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:44,608 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f39a5f_7
-2022-08-26 14:09:44,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,609 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33397
-2022-08-26 14:09:44,609 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33397
-2022-08-26 14:09:44,609 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:44,609 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42759
-2022-08-26 14:09:44,609 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42425
-2022-08-26 14:09:44,609 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,609 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:44,609 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:44,609 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-206t_wt3
-2022-08-26 14:09:44,609 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,612 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32909', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:44,612 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32909
-2022-08-26 14:09:44,612 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,613 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33397', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:44,613 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33397
-2022-08-26 14:09:44,613 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,613 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42425
-2022-08-26 14:09:44,613 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,613 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42425
-2022-08-26 14:09:44,613 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:44,614 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,614 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,627 - distributed.scheduler - INFO - Receive client connection: Client-68b8fd76-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,628 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:44,719 - distributed.scheduler - INFO - Remove client Client-68b8fd76-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,719 - distributed.scheduler - INFO - Remove client Client-68b8fd76-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,719 - distributed.scheduler - INFO - Close client connection: Client-68b8fd76-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:44,720 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32909
-2022-08-26 14:09:44,720 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33397
-2022-08-26 14:09:44,721 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32909', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:44,721 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32909
-2022-08-26 14:09:44,721 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33397', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:44,721 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33397
-2022-08-26 14:09:44,721 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:44,721 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-562b3228-5ee9-4dfb-9ade-374adf11d9f8 Address tcp://127.0.0.1:32909 Status: Status.closing
-2022-08-26 14:09:44,722 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8ba31dd0-2dd6-4c44-a139-c8a7b925d258 Address tcp://127.0.0.1:33397 Status: Status.closing
-2022-08-26 14:09:44,723 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:44,723 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:44,934 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_failing_task_increments_suspicious 2022-08-26 14:09:44,939 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:44,941 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:44,941 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42495
-2022-08-26 14:09:44,941 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38915
-2022-08-26 14:09:44,947 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:41819'
-2022-08-26 14:09:44,947 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:45427'
-2022-08-26 14:09:45,639 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34245
-2022-08-26 14:09:45,639 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34245
-2022-08-26 14:09:45,639 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:45,639 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40565
-2022-08-26 14:09:45,639 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42495
-2022-08-26 14:09:45,639 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:45,639 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:45,639 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:45,639 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y1z87pqf
-2022-08-26 14:09:45,639 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:45,648 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42611
-2022-08-26 14:09:45,648 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42611
-2022-08-26 14:09:45,648 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:45,648 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43639
-2022-08-26 14:09:45,648 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42495
-2022-08-26 14:09:45,648 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:45,648 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:45,648 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:45,648 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_zim1dox
-2022-08-26 14:09:45,648 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:45,923 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42611', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:45,924 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42611
-2022-08-26 14:09:45,924 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:45,924 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42495
-2022-08-26 14:09:45,924 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:45,924 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:45,934 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34245', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:45,935 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34245
-2022-08-26 14:09:45,935 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:45,935 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42495
-2022-08-26 14:09:45,935 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:45,935 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:45,939 - distributed.scheduler - INFO - Receive client connection: Client-69811b49-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:45,939 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:45,952 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:09:46,076 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34245', name: 0, status: running, memory: 0, processing: 1>
-2022-08-26 14:09:46,076 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34245
-2022-08-26 14:09:46,076 - distributed.scheduler - INFO - Task exit-c82ea2d1331521a93741b191018ec492 marked as failed because 0 workers died while trying to run it
-2022-08-26 14:09:46,078 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:09:46,089 - distributed.scheduler - INFO - Remove client Client-69811b49-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:46,089 - distributed.scheduler - INFO - Remove client Client-69811b49-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:46,090 - distributed.scheduler - INFO - Close client connection: Client-69811b49-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:46,090 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41819'.
-2022-08-26 14:09:46,090 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:46,090 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:45427'.
-2022-08-26 14:09:46,090 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:46,091 - distributed.nanny - ERROR - Error in Nanny killing Worker subprocess
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 595, in close
-    await self.kill(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 386, in kill
-    await self.process.kill(timeout=0.8 * (deadline - time()))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 796, in kill
-    await process.join(wait_timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/process.py", line 311, in join
-    assert self._state.pid is not None, "can only join a started process"
-AssertionError: can only join a started process
-2022-08-26 14:09:46,091 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42611
-2022-08-26 14:09:46,092 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e499ccda-9aff-4bdf-940d-11216acf856f Address tcp://127.0.0.1:42611 Status: Status.closing
-2022-08-26 14:09:46,093 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42611', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:46,093 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42611
-2022-08-26 14:09:46,093 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:46,112 - tornado.application - ERROR - Exception in callback functools.partial(<built-in method set_result of _asyncio.Future object at 0x564040c1eae0>, None)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 740, in _run_callback
-    ret = callback()
-asyncio.exceptions.InvalidStateError: invalid state
-2022-08-26 14:09:46,237 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:46,237 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:46,446 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-2022-08-26 14:09:46,778 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42615
-2022-08-26 14:09:46,778 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42615
-2022-08-26 14:09:46,778 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:46,778 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37053
-2022-08-26 14:09:46,778 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42495
-2022-08-26 14:09:46,778 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:46,778 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:46,778 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:46,779 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-v_u8uv9h
-2022-08-26 14:09:46,779 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:46,779 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42615
-2022-08-26 14:09:46,779 - distributed.worker - INFO - Closed worker has not yet started: Status.init
-2022-08-26 14:09:46,848 - distributed.process - WARNING - [<AsyncProcess Dask Worker process (from Nanny)>] process 647789 exit status was already read will report exitcode 255
-PASSED
-distributed/tests/test_scheduler.py::test_task_group_non_tuple_key 2022-08-26 14:09:46,854 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:46,856 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:46,856 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43749
-2022-08-26 14:09:46,856 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41169
-2022-08-26 14:09:46,861 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33485
-2022-08-26 14:09:46,861 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33485
-2022-08-26 14:09:46,861 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:46,861 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37771
-2022-08-26 14:09:46,861 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43749
-2022-08-26 14:09:46,861 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:46,861 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:46,861 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:46,861 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1roz78wy
-2022-08-26 14:09:46,861 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:46,862 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39301
-2022-08-26 14:09:46,862 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39301
-2022-08-26 14:09:46,862 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:46,862 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36503
-2022-08-26 14:09:46,862 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43749
-2022-08-26 14:09:46,862 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:46,862 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:46,862 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:46,862 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f7681d_6
-2022-08-26 14:09:46,862 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:46,865 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33485', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:46,865 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33485
-2022-08-26 14:09:46,865 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:46,866 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39301', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:46,866 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39301
-2022-08-26 14:09:46,866 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:46,866 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43749
-2022-08-26 14:09:46,866 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:46,867 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43749
-2022-08-26 14:09:46,867 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:46,867 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:46,867 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:46,881 - distributed.scheduler - INFO - Receive client connection: Client-6a10d07d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:46,881 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:46,936 - distributed.scheduler - INFO - Remove client Client-6a10d07d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:46,937 - distributed.scheduler - INFO - Remove client Client-6a10d07d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:46,937 - distributed.scheduler - INFO - Close client connection: Client-6a10d07d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:46,938 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33485
-2022-08-26 14:09:46,939 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39301
-2022-08-26 14:09:46,939 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e073311c-ec69-4bd6-a4d2-4f325076282e Address tcp://127.0.0.1:33485 Status: Status.closing
-2022-08-26 14:09:46,940 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d75de5ed-555e-4d07-ae71-1306a1c86f8e Address tcp://127.0.0.1:39301 Status: Status.closing
-2022-08-26 14:09:46,940 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33485', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:46,940 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33485
-2022-08-26 14:09:46,941 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39301', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:46,941 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39301
-2022-08-26 14:09:46,941 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:46,942 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:46,942 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:47,151 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_task_unique_groups 2022-08-26 14:09:47,157 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:47,159 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:47,159 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41857
-2022-08-26 14:09:47,159 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34743
-2022-08-26 14:09:47,163 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41427
-2022-08-26 14:09:47,163 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41427
-2022-08-26 14:09:47,163 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:47,163 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34495
-2022-08-26 14:09:47,163 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41857
-2022-08-26 14:09:47,164 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,164 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:47,164 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:47,164 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t6m5r6rq
-2022-08-26 14:09:47,164 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,164 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42555
-2022-08-26 14:09:47,164 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42555
-2022-08-26 14:09:47,164 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:47,164 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37679
-2022-08-26 14:09:47,164 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41857
-2022-08-26 14:09:47,165 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,165 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:47,165 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:47,165 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gmlpwjw2
-2022-08-26 14:09:47,165 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,168 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41427', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:47,168 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41427
-2022-08-26 14:09:47,168 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:47,168 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42555', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:47,169 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42555
-2022-08-26 14:09:47,169 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:47,169 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41857
-2022-08-26 14:09:47,169 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,169 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41857
-2022-08-26 14:09:47,169 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,170 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:47,170 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:47,183 - distributed.scheduler - INFO - Receive client connection: Client-6a3efef2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:47,184 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:47,206 - distributed.scheduler - INFO - Remove client Client-6a3efef2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:47,206 - distributed.scheduler - INFO - Remove client Client-6a3efef2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:47,207 - distributed.scheduler - INFO - Close client connection: Client-6a3efef2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:47,208 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41427
-2022-08-26 14:09:47,208 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42555
-2022-08-26 14:09:47,209 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6defd033-625e-4c13-9187-4dd67410b1c0 Address tcp://127.0.0.1:41427 Status: Status.closing
-2022-08-26 14:09:47,209 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d064a975-4d53-413d-b3e1-7908b0e5f91e Address tcp://127.0.0.1:42555 Status: Status.closing
-2022-08-26 14:09:47,210 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41427', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:47,210 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41427
-2022-08-26 14:09:47,210 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42555', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:47,210 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42555
-2022-08-26 14:09:47,210 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:47,211 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:47,212 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:47,420 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_task_group_on_fire_and_forget 2022-08-26 14:09:47,426 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:47,427 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:47,428 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41725
-2022-08-26 14:09:47,428 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38423
-2022-08-26 14:09:47,432 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37711
-2022-08-26 14:09:47,432 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37711
-2022-08-26 14:09:47,432 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:47,432 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36003
-2022-08-26 14:09:47,432 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41725
-2022-08-26 14:09:47,432 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,432 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:47,432 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:47,433 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m0rn7pzu
-2022-08-26 14:09:47,433 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,433 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41121
-2022-08-26 14:09:47,433 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41121
-2022-08-26 14:09:47,433 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:47,433 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34471
-2022-08-26 14:09:47,433 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41725
-2022-08-26 14:09:47,433 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,433 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:47,433 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:47,434 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-iy2rvsmi
-2022-08-26 14:09:47,434 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,436 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37711', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:47,437 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37711
-2022-08-26 14:09:47,437 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:47,437 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41121', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:47,437 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41121
-2022-08-26 14:09:47,438 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:47,438 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41725
-2022-08-26 14:09:47,438 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,438 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41725
-2022-08-26 14:09:47,438 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:47,438 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:47,438 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:47,452 - distributed.scheduler - INFO - Receive client connection: Client-6a680313-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:47,452 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:48,475 - distributed.scheduler - INFO - Remove client Client-6a680313-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:48,475 - distributed.scheduler - INFO - Remove client Client-6a680313-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:48,475 - distributed.scheduler - INFO - Close client connection: Client-6a680313-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:48,476 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37711
-2022-08-26 14:09:48,476 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41121
-2022-08-26 14:09:48,477 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37711', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:48,477 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37711
-2022-08-26 14:09:48,478 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41121', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:48,478 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41121
-2022-08-26 14:09:48,478 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:48,478 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7f753850-f598-4c34-8c09-c7cfb5d8c713 Address tcp://127.0.0.1:37711 Status: Status.closing
-2022-08-26 14:09:48,478 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0ff347c1-e883-449c-bb07-040aedd5b486 Address tcp://127.0.0.1:41121 Status: Status.closing
-2022-08-26 14:09:48,480 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:48,480 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:48,690 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gather_failing_cnn_recover 2022-08-26 14:09:48,695 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:48,697 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:48,697 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45129
-2022-08-26 14:09:48,697 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42545
-2022-08-26 14:09:48,702 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34431
-2022-08-26 14:09:48,702 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34431
-2022-08-26 14:09:48,702 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:48,702 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46007
-2022-08-26 14:09:48,702 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45129
-2022-08-26 14:09:48,702 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,702 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:48,702 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:48,702 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6hylkmqn
-2022-08-26 14:09:48,702 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,703 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36159
-2022-08-26 14:09:48,703 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36159
-2022-08-26 14:09:48,703 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:48,703 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35643
-2022-08-26 14:09:48,703 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45129
-2022-08-26 14:09:48,703 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,703 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:48,703 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:48,703 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-07xlmmaq
-2022-08-26 14:09:48,703 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,706 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34431', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:48,706 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34431
-2022-08-26 14:09:48,706 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:48,707 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36159', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:48,707 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36159
-2022-08-26 14:09:48,707 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:48,707 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45129
-2022-08-26 14:09:48,707 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,707 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45129
-2022-08-26 14:09:48,707 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,708 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:48,708 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:48,721 - distributed.scheduler - INFO - Receive client connection: Client-6b29af5c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:48,722 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:48,725 - distributed.utils_comm - INFO - Retrying get_data_from_worker after exception in attempt 0/1: 
-2022-08-26 14:09:48,733 - distributed.scheduler - INFO - Remove client Client-6b29af5c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:48,734 - distributed.scheduler - INFO - Remove client Client-6b29af5c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:48,734 - distributed.scheduler - INFO - Close client connection: Client-6b29af5c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:48,735 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34431
-2022-08-26 14:09:48,735 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36159
-2022-08-26 14:09:48,736 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36159', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:48,736 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36159
-2022-08-26 14:09:48,736 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c990b02d-ff3a-45e4-907f-cac5534ffd19 Address tcp://127.0.0.1:36159 Status: Status.closing
-2022-08-26 14:09:48,736 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-173f38c6-1405-439b-b629-0bea33348b83 Address tcp://127.0.0.1:34431 Status: Status.closing
-2022-08-26 14:09:48,737 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34431', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:48,737 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34431
-2022-08-26 14:09:48,737 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:48,738 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:48,738 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:48,946 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gather_failing_cnn_error 2022-08-26 14:09:48,952 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:48,953 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:48,953 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38361
-2022-08-26 14:09:48,954 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34529
-2022-08-26 14:09:48,958 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36283
-2022-08-26 14:09:48,958 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36283
-2022-08-26 14:09:48,958 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:48,958 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41625
-2022-08-26 14:09:48,958 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38361
-2022-08-26 14:09:48,958 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,958 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:48,958 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:48,958 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_75jsf85
-2022-08-26 14:09:48,958 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,959 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34041
-2022-08-26 14:09:48,959 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34041
-2022-08-26 14:09:48,959 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:48,959 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41841
-2022-08-26 14:09:48,959 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38361
-2022-08-26 14:09:48,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,959 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:48,959 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:48,959 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-466j7squ
-2022-08-26 14:09:48,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,962 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36283', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:48,962 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36283
-2022-08-26 14:09:48,962 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:48,963 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34041', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:48,963 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34041
-2022-08-26 14:09:48,963 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:48,963 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38361
-2022-08-26 14:09:48,963 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,964 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38361
-2022-08-26 14:09:48,964 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:48,964 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:48,964 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:48,978 - distributed.scheduler - INFO - Receive client connection: Client-6b50c855-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:48,978 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:48,981 - distributed.scheduler - ERROR - Couldn't gather keys {'x': ['tcp://127.0.0.1:36283']} state: ['memory'] workers: ['tcp://127.0.0.1:36283']
-NoneType: None
-2022-08-26 14:09:48,982 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36283', name: 0, status: running, memory: 1, processing: 0>
-2022-08-26 14:09:48,982 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36283
-2022-08-26 14:09:48,982 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: ['tcp://127.0.0.1:36283'], x
-NoneType: None
-2022-08-26 14:09:48,982 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36283
-2022-08-26 14:09:48,983 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-59d395f6-47fd-49a2-97fb-d596f4dfcd7b Address tcp://127.0.0.1:36283 Status: Status.closing
-2022-08-26 14:09:48,989 - distributed.scheduler - INFO - Remove client Client-6b50c855-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:48,990 - distributed.scheduler - INFO - Remove client Client-6b50c855-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:48,990 - distributed.scheduler - INFO - Close client connection: Client-6b50c855-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:48,990 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34041
-2022-08-26 14:09:48,991 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34041', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:48,991 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34041
-2022-08-26 14:09:48,991 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:48,991 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-688d6a43-d9af-4282-917d-65e3c11cbb67 Address tcp://127.0.0.1:34041 Status: Status.closing
-2022-08-26 14:09:48,992 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:48,992 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:49,200 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gather_no_workers 2022-08-26 14:09:49,205 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:49,207 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:49,207 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40629
-2022-08-26 14:09:49,207 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35271
-2022-08-26 14:09:49,212 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45539
-2022-08-26 14:09:49,212 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45539
-2022-08-26 14:09:49,212 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:49,212 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45367
-2022-08-26 14:09:49,212 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40629
-2022-08-26 14:09:49,212 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:49,212 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:49,212 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:49,212 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wwu9ggmp
-2022-08-26 14:09:49,212 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:49,213 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39597
-2022-08-26 14:09:49,213 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39597
-2022-08-26 14:09:49,213 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:49,213 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42491
-2022-08-26 14:09:49,213 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40629
-2022-08-26 14:09:49,213 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:49,213 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:49,213 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:49,213 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d54vek53
-2022-08-26 14:09:49,213 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:49,216 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45539', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:49,216 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45539
-2022-08-26 14:09:49,216 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:49,217 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39597', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:49,217 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39597
-2022-08-26 14:09:49,217 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:49,217 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40629
-2022-08-26 14:09:49,217 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:49,217 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40629
-2022-08-26 14:09:49,217 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:49,218 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:49,218 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:49,231 - distributed.scheduler - INFO - Receive client connection: Client-6b77806f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:49,232 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:50,235 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45539
-2022-08-26 14:09:50,236 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45539', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:09:50,236 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45539
-2022-08-26 14:09:50,236 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e10dbe96-a7c8-461b-945a-8df8017ea13d Address tcp://127.0.0.1:45539 Status: Status.closing
-2022-08-26 14:09:50,237 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39597
-2022-08-26 14:09:50,238 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39597', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:50,238 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39597
-2022-08-26 14:09:50,238 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:50,238 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-45835cd5-1262-4444-ba1a-bd3ace72aad9 Address tcp://127.0.0.1:39597 Status: Status.closing
-2022-08-26 14:09:50,239 - distributed.scheduler - ERROR - Couldn't gather keys {'x': []} state: [None] workers: []
-NoneType: None
-2022-08-26 14:09:50,239 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], x
-NoneType: None
-2022-08-26 14:09:50,250 - distributed.scheduler - INFO - Remove client Client-6b77806f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:50,250 - distributed.scheduler - INFO - Remove client Client-6b77806f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:50,251 - distributed.scheduler - INFO - Close client connection: Client-6b77806f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:50,251 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:50,251 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:50,458 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gather_bad_worker_removed 2022-08-26 14:09:50,464 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:50,466 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:50,466 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39483
-2022-08-26 14:09:50,466 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41941
-2022-08-26 14:09:50,470 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35773
-2022-08-26 14:09:50,471 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35773
-2022-08-26 14:09:50,471 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:50,471 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34701
-2022-08-26 14:09:50,471 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39483
-2022-08-26 14:09:50,471 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,471 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:50,471 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:50,471 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gdqbz826
-2022-08-26 14:09:50,471 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,471 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45367
-2022-08-26 14:09:50,471 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45367
-2022-08-26 14:09:50,471 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:50,472 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39771
-2022-08-26 14:09:50,472 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39483
-2022-08-26 14:09:50,472 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,472 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:50,472 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:50,472 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5gby2uce
-2022-08-26 14:09:50,472 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,475 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35773', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:50,475 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35773
-2022-08-26 14:09:50,475 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:50,475 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45367', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:50,476 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45367
-2022-08-26 14:09:50,476 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:50,476 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39483
-2022-08-26 14:09:50,476 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,476 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39483
-2022-08-26 14:09:50,476 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,476 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:50,476 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:50,490 - distributed.scheduler - INFO - Receive client connection: Client-6c378fb5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:50,490 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:50,526 - distributed.scheduler - ERROR - Couldn't gather keys {'final': ['tcp://127.0.0.1:35773']} state: ['memory'] workers: ['tcp://127.0.0.1:35773']
-NoneType: None
-2022-08-26 14:09:50,526 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35773', name: 0, status: running, memory: 2, processing: 0>
-2022-08-26 14:09:50,527 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35773
-2022-08-26 14:09:50,527 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: ['tcp://127.0.0.1:35773'], final
-NoneType: None
-2022-08-26 14:09:50,529 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35773
-2022-08-26 14:09:50,530 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-959f89e7-e242-4ab1-8d18-01b9a4742f41 Address tcp://127.0.0.1:35773 Status: Status.closing
-2022-08-26 14:09:50,564 - distributed.scheduler - INFO - Remove client Client-6c378fb5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:50,564 - distributed.scheduler - INFO - Remove client Client-6c378fb5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:50,564 - distributed.scheduler - INFO - Close client connection: Client-6c378fb5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:50,564 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45367
-2022-08-26 14:09:50,565 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45367', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:50,565 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45367
-2022-08-26 14:09:50,565 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:50,565 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2d1b9eaf-0edb-4f1c-87b9-3f90efb5b7ee Address tcp://127.0.0.1:45367 Status: Status.closing
-2022-08-26 14:09:50,566 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:50,566 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:50,774 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_too_many_groups 2022-08-26 14:09:50,780 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:50,782 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:50,782 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43405
-2022-08-26 14:09:50,782 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35057
-2022-08-26 14:09:50,786 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38619
-2022-08-26 14:09:50,786 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38619
-2022-08-26 14:09:50,786 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:50,786 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40861
-2022-08-26 14:09:50,786 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43405
-2022-08-26 14:09:50,786 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,786 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:50,787 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:50,787 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uj5iv6ew
-2022-08-26 14:09:50,787 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,787 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40611
-2022-08-26 14:09:50,787 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40611
-2022-08-26 14:09:50,787 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:50,787 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38775
-2022-08-26 14:09:50,787 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43405
-2022-08-26 14:09:50,787 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,787 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:50,787 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:50,788 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f4z97ph4
-2022-08-26 14:09:50,788 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,790 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38619', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:50,791 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38619
-2022-08-26 14:09:50,791 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:50,791 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40611', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:50,791 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40611
-2022-08-26 14:09:50,791 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:50,792 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43405
-2022-08-26 14:09:50,792 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,792 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43405
-2022-08-26 14:09:50,792 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:50,792 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:50,792 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:50,806 - distributed.scheduler - INFO - Receive client connection: Client-6c67c338-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:50,806 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:50,843 - distributed.scheduler - INFO - Remove client Client-6c67c338-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:50,843 - distributed.scheduler - INFO - Remove client Client-6c67c338-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:50,843 - distributed.scheduler - INFO - Close client connection: Client-6c67c338-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:50,844 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38619
-2022-08-26 14:09:50,844 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40611
-2022-08-26 14:09:50,845 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38619', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:50,845 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38619
-2022-08-26 14:09:50,845 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40611', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:50,845 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40611
-2022-08-26 14:09:50,846 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:50,846 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2caba9b4-f976-4770-8161-d79bc8955959 Address tcp://127.0.0.1:38619 Status: Status.closing
-2022-08-26 14:09:50,846 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7ac59e22-f798-449e-baca-b00b6d45abcb Address tcp://127.0.0.1:40611 Status: Status.closing
-2022-08-26 14:09:50,847 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:50,847 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:51,055 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_multiple_listeners 2022-08-26 14:09:51,080 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:51,082 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:51,082 - distributed.scheduler - INFO -   Scheduler at: inproc://192.168.1.159/518557/858
-2022-08-26 14:09:51,082 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:46011
-2022-08-26 14:09:51,082 - distributed.scheduler - INFO -   dashboard at:           localhost:36735
-2022-08-26 14:09:51,085 - distributed.worker - INFO -       Start worker at: inproc://192.168.1.159/518557/859
-2022-08-26 14:09:51,085 - distributed.worker - INFO -          Listening to:        inproc192.168.1.159
-2022-08-26 14:09:51,085 - distributed.worker - INFO -          dashboard at:        192.168.1.159:41781
-2022-08-26 14:09:51,085 - distributed.worker - INFO - Waiting to connect to: inproc://192.168.1.159/518557/858
-2022-08-26 14:09:51,085 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,085 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:51,085 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:51,085 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-38kgmdx2
-2022-08-26 14:09:51,086 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,086 - distributed.scheduler - INFO - Register worker <WorkerState 'inproc://192.168.1.159/518557/859', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:51,087 - distributed.scheduler - INFO - Starting worker compute stream, inproc://192.168.1.159/518557/859
-2022-08-26 14:09:51,087 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,087 - distributed.worker - INFO -         Registered to: inproc://192.168.1.159/518557/858
-2022-08-26 14:09:51,087 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,089 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:40333
-2022-08-26 14:09:51,089 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:40333
-2022-08-26 14:09:51,089 - distributed.worker - INFO -          dashboard at:        192.168.1.159:46809
-2022-08-26 14:09:51,089 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:46011
-2022-08-26 14:09:51,089 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,089 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:51,090 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:51,090 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j8vbdsso
-2022-08-26 14:09:51,090 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,090 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,091 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://192.168.1.159:40333', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:51,092 - distributed.scheduler - INFO - Starting worker compute stream, tcp://192.168.1.159:40333
-2022-08-26 14:09:51,092 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,092 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:46011
-2022-08-26 14:09:51,092 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,093 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,094 - distributed.scheduler - INFO - Receive client connection: Client-6c93de83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:51,094 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,142 - distributed.scheduler - INFO - Remove client Client-6c93de83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:51,143 - distributed.scheduler - INFO - Remove client Client-6c93de83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:51,144 - distributed.scheduler - INFO - Close client connection: Client-6c93de83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:51,147 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:40333
-2022-08-26 14:09:51,147 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ccc7da50-4d22-487c-9f8d-f3e24e7458fc Address tcp://192.168.1.159:40333 Status: Status.closing
-2022-08-26 14:09:51,148 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://192.168.1.159:40333', status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:51,148 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:40333
-2022-08-26 14:09:51,149 - distributed.worker - INFO - Stopping worker at inproc://192.168.1.159/518557/859
-2022-08-26 14:09:51,150 - distributed.scheduler - INFO - Remove worker <WorkerState 'inproc://192.168.1.159/518557/859', status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:51,150 - distributed.core - INFO - Removing comms to inproc://192.168.1.159/518557/859
-2022-08-26 14:09:51,150 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:51,150 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d51fb819-189a-4600-acfc-549e7bd55a43 Address inproc://192.168.1.159/518557/859 Status: Status.closing
-2022-08-26 14:09:51,151 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:51,151 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_scheduler.py::test_worker_name_collision 2022-08-26 14:09:51,157 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:51,159 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:51,159 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37881
-2022-08-26 14:09:51,159 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38613
-2022-08-26 14:09:51,162 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33655
-2022-08-26 14:09:51,162 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33655
-2022-08-26 14:09:51,162 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:51,162 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43993
-2022-08-26 14:09:51,162 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37881
-2022-08-26 14:09:51,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,162 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:51,162 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:51,162 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lg_41bd_
-2022-08-26 14:09:51,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,164 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33655', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:51,164 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33655
-2022-08-26 14:09:51,164 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,165 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37881
-2022-08-26 14:09:51,165 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,165 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,178 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36617
-2022-08-26 14:09:51,178 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36617
-2022-08-26 14:09:51,178 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:51,178 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42951
-2022-08-26 14:09:51,178 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37881
-2022-08-26 14:09:51,178 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,178 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:09:51,178 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:51,178 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a2d6dkxz
-2022-08-26 14:09:51,178 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,180 - distributed.scheduler - WARNING - Worker tried to connect with a duplicate name: 0
-2022-08-26 14:09:51,180 - distributed.worker - ERROR - Unable to connect to scheduler: name taken, 0
-2022-08-26 14:09:51,180 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36617
-2022-08-26 14:09:51,181 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33655
-2022-08-26 14:09:51,182 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33655', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:51,182 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33655
-2022-08-26 14:09:51,182 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:51,182 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1977d5e0-49b3-4526-aa54-5664c86a520c Address tcp://127.0.0.1:33655 Status: Status.closing
-2022-08-26 14:09:51,183 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:51,183 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:51,394 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_unknown_task_duration_config 2022-08-26 14:09:51,399 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:51,401 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:51,401 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41661
-2022-08-26 14:09:51,401 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42189
-2022-08-26 14:09:51,405 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33575
-2022-08-26 14:09:51,406 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33575
-2022-08-26 14:09:51,406 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:51,406 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46753
-2022-08-26 14:09:51,406 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41661
-2022-08-26 14:09:51,406 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,406 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:51,406 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:51,406 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-am9b8gxp
-2022-08-26 14:09:51,406 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,406 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35461
-2022-08-26 14:09:51,406 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35461
-2022-08-26 14:09:51,406 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:51,407 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38807
-2022-08-26 14:09:51,407 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41661
-2022-08-26 14:09:51,407 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,407 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:51,407 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:51,407 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-u95d36m8
-2022-08-26 14:09:51,407 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,410 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33575', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:51,410 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33575
-2022-08-26 14:09:51,410 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,410 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35461', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:51,411 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35461
-2022-08-26 14:09:51,411 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,411 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41661
-2022-08-26 14:09:51,411 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,411 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41661
-2022-08-26 14:09:51,411 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,411 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,411 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,425 - distributed.scheduler - INFO - Receive client connection: Client-6cc63b0a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:51,425 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,469 - distributed.scheduler - INFO - Remove client Client-6cc63b0a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:51,469 - distributed.scheduler - INFO - Remove client Client-6cc63b0a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:51,470 - distributed.scheduler - INFO - Close client connection: Client-6cc63b0a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:51,470 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33575
-2022-08-26 14:09:51,470 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35461
-2022-08-26 14:09:51,471 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33575', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:51,471 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33575
-2022-08-26 14:09:51,471 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35461', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:51,472 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35461
-2022-08-26 14:09:51,472 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:51,472 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2c717a87-1174-4edd-91d7-a3138f358449 Address tcp://127.0.0.1:33575 Status: Status.closing
-2022-08-26 14:09:51,472 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0af249b5-e786-4d31-88e1-4d70cdf6ded3 Address tcp://127.0.0.1:35461 Status: Status.closing
-2022-08-26 14:09:51,473 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:51,473 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:51,681 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_unknown_task_duration_config_2 2022-08-26 14:09:51,687 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:51,688 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:51,688 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44133
-2022-08-26 14:09:51,688 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36487
-2022-08-26 14:09:51,693 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40879
-2022-08-26 14:09:51,693 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40879
-2022-08-26 14:09:51,693 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:51,693 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39829
-2022-08-26 14:09:51,693 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44133
-2022-08-26 14:09:51,693 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,693 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:51,693 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:51,693 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_k3dugih
-2022-08-26 14:09:51,693 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,694 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46811
-2022-08-26 14:09:51,694 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46811
-2022-08-26 14:09:51,694 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:51,694 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42167
-2022-08-26 14:09:51,694 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44133
-2022-08-26 14:09:51,694 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,694 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:51,694 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:51,694 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5yx0n18f
-2022-08-26 14:09:51,694 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,697 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40879', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:51,697 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40879
-2022-08-26 14:09:51,697 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,698 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46811', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:51,698 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46811
-2022-08-26 14:09:51,698 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,698 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44133
-2022-08-26 14:09:51,698 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,699 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44133
-2022-08-26 14:09:51,699 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,699 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,699 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,710 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40879
-2022-08-26 14:09:51,710 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46811
-2022-08-26 14:09:51,711 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40879', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:51,711 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40879
-2022-08-26 14:09:51,711 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46811', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:51,711 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46811
-2022-08-26 14:09:51,712 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:51,712 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5ba95f4c-1d20-4b70-9d32-e8fd3acfa9cc Address tcp://127.0.0.1:40879 Status: Status.closing
-2022-08-26 14:09:51,712 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-36276905-9f8f-4f93-bf61-bbb1f8ef541f Address tcp://127.0.0.1:46811 Status: Status.closing
-2022-08-26 14:09:51,713 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:51,713 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:51,921 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_retire_state_change 2022-08-26 14:09:51,927 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:51,928 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:51,928 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40569
-2022-08-26 14:09:51,928 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42251
-2022-08-26 14:09:51,933 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39327
-2022-08-26 14:09:51,933 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39327
-2022-08-26 14:09:51,933 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:51,933 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45691
-2022-08-26 14:09:51,933 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40569
-2022-08-26 14:09:51,933 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,933 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:51,933 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:51,933 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hb8onzfi
-2022-08-26 14:09:51,933 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,934 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35159
-2022-08-26 14:09:51,934 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35159
-2022-08-26 14:09:51,934 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:51,934 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43921
-2022-08-26 14:09:51,934 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40569
-2022-08-26 14:09:51,934 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,934 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:51,934 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:51,934 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ufftldjy
-2022-08-26 14:09:51,934 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,937 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39327', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:51,937 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39327
-2022-08-26 14:09:51,937 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,938 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35159', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:51,938 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35159
-2022-08-26 14:09:51,938 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,938 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40569
-2022-08-26 14:09:51,938 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,938 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40569
-2022-08-26 14:09:51,939 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:51,939 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,939 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,953 - distributed.scheduler - INFO - Receive client connection: Client-6d16b459-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:51,953 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:51,975 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:39327
-2022-08-26 14:09:51,976 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:39327; 3 keys are being moved away.
-2022-08-26 14:09:51,986 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39327', name: 0, status: closing_gracefully, memory: 0, processing: 0>
-2022-08-26 14:09:51,987 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39327
-2022-08-26 14:09:51,987 - distributed.scheduler - INFO - Retired worker tcp://127.0.0.1:39327
-2022-08-26 14:09:51,987 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39327
-2022-08-26 14:09:51,990 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0e9060d1-f1ca-4af8-8521-d86c95affabb Address tcp://127.0.0.1:39327 Status: Status.closing
-2022-08-26 14:09:52,031 - distributed.scheduler - INFO - Remove client Client-6d16b459-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:52,032 - distributed.scheduler - INFO - Remove client Client-6d16b459-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:52,032 - distributed.scheduler - INFO - Close client connection: Client-6d16b459-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:52,032 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35159
-2022-08-26 14:09:52,033 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35159', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:52,033 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35159
-2022-08-26 14:09:52,033 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:52,033 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1819b743-5361-4234-ac2d-b1ad74cd09af Address tcp://127.0.0.1:35159 Status: Status.closing
-2022-08-26 14:09:52,034 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:52,034 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:52,243 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_configurable_events_log_length 2022-08-26 14:09:52,249 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:52,250 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:52,251 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37591
-2022-08-26 14:09:52,251 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40715
-2022-08-26 14:09:52,255 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34719
-2022-08-26 14:09:52,255 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34719
-2022-08-26 14:09:52,255 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:52,255 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44305
-2022-08-26 14:09:52,255 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37591
-2022-08-26 14:09:52,255 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,255 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:52,255 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:52,255 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-u9l3dkb0
-2022-08-26 14:09:52,255 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,256 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37985
-2022-08-26 14:09:52,256 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37985
-2022-08-26 14:09:52,256 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:52,256 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40911
-2022-08-26 14:09:52,256 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37591
-2022-08-26 14:09:52,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,256 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:52,256 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:52,256 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-boi3zez8
-2022-08-26 14:09:52,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,259 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34719', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:52,259 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34719
-2022-08-26 14:09:52,260 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,260 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37985', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:52,260 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37985
-2022-08-26 14:09:52,260 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,260 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37591
-2022-08-26 14:09:52,260 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,261 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37591
-2022-08-26 14:09:52,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,261 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,261 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,275 - distributed.scheduler - INFO - Receive client connection: Client-6d47df3f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:52,275 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,286 - distributed.scheduler - INFO - Remove client Client-6d47df3f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:52,286 - distributed.scheduler - INFO - Remove client Client-6d47df3f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:52,287 - distributed.scheduler - INFO - Close client connection: Client-6d47df3f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:52,287 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34719
-2022-08-26 14:09:52,287 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37985
-2022-08-26 14:09:52,288 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34719', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:52,288 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34719
-2022-08-26 14:09:52,288 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37985', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:52,288 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37985
-2022-08-26 14:09:52,288 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:52,288 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7ed42592-5f66-4565-9ffc-966518f3e07b Address tcp://127.0.0.1:34719 Status: Status.closing
-2022-08-26 14:09:52,289 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-efd3871b-87a4-445a-9c5b-c052d80e1660 Address tcp://127.0.0.1:37985 Status: Status.closing
-2022-08-26 14:09:52,290 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:52,290 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:52,499 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_get_worker_monitor_info 2022-08-26 14:09:52,505 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:52,507 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:52,507 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34531
-2022-08-26 14:09:52,507 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39155
-2022-08-26 14:09:52,512 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38251
-2022-08-26 14:09:52,512 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38251
-2022-08-26 14:09:52,512 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:52,512 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37733
-2022-08-26 14:09:52,512 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34531
-2022-08-26 14:09:52,512 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,512 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:52,512 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:52,512 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fde08e3e
-2022-08-26 14:09:52,512 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,512 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39925
-2022-08-26 14:09:52,513 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39925
-2022-08-26 14:09:52,513 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:52,513 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41193
-2022-08-26 14:09:52,513 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34531
-2022-08-26 14:09:52,513 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,513 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:52,513 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:52,513 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-o8qh1054
-2022-08-26 14:09:52,513 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,516 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38251', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:52,516 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38251
-2022-08-26 14:09:52,516 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,516 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39925', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:52,517 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39925
-2022-08-26 14:09:52,517 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,517 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34531
-2022-08-26 14:09:52,517 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,517 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34531
-2022-08-26 14:09:52,517 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,518 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,518 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,531 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38251
-2022-08-26 14:09:52,532 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39925
-2022-08-26 14:09:52,532 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38251', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:52,533 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38251
-2022-08-26 14:09:52,533 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39925', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:52,533 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39925
-2022-08-26 14:09:52,533 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:52,533 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-21c313f7-75e3-4531-8377-ed5fb8d5a85b Address tcp://127.0.0.1:38251 Status: Status.closing
-2022-08-26 14:09:52,533 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-327eeb32-95fb-40bd-9480-111f6f6d4a14 Address tcp://127.0.0.1:39925 Status: Status.closing
-2022-08-26 14:09:52,534 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:52,534 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:52,743 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_quiet_cluster_round_robin 2022-08-26 14:09:52,749 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:52,751 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:52,751 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33941
-2022-08-26 14:09:52,751 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41873
-2022-08-26 14:09:52,755 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34329
-2022-08-26 14:09:52,755 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34329
-2022-08-26 14:09:52,755 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:52,756 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41487
-2022-08-26 14:09:52,756 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33941
-2022-08-26 14:09:52,756 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,756 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:52,756 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:52,756 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-n5ai89lv
-2022-08-26 14:09:52,756 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,756 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39589
-2022-08-26 14:09:52,756 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39589
-2022-08-26 14:09:52,756 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:52,756 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33053
-2022-08-26 14:09:52,756 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33941
-2022-08-26 14:09:52,757 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,757 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:52,757 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:52,757 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qtcmxg88
-2022-08-26 14:09:52,757 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,760 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34329', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:52,760 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34329
-2022-08-26 14:09:52,760 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,760 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39589', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:52,761 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39589
-2022-08-26 14:09:52,761 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,761 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33941
-2022-08-26 14:09:52,761 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,761 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33941
-2022-08-26 14:09:52,761 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:52,761 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,762 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,775 - distributed.scheduler - INFO - Receive client connection: Client-6d943d26-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:52,775 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:52,820 - distributed.scheduler - INFO - Remove client Client-6d943d26-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:52,821 - distributed.scheduler - INFO - Remove client Client-6d943d26-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:52,821 - distributed.scheduler - INFO - Close client connection: Client-6d943d26-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:52,822 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34329
-2022-08-26 14:09:52,822 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39589
-2022-08-26 14:09:52,823 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39589', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:52,823 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39589
-2022-08-26 14:09:52,823 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b79f88ec-cf3e-4ba2-91e0-a7ab4e173a66 Address tcp://127.0.0.1:39589 Status: Status.closing
-2022-08-26 14:09:52,824 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-829eec29-29d2-4f5c-b59e-8bbb54b97202 Address tcp://127.0.0.1:34329 Status: Status.closing
-2022-08-26 14:09:52,824 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34329', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:52,824 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34329
-2022-08-26 14:09:52,824 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:52,825 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:52,825 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:53,035 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_memorystate PASSED
-distributed/tests/test_scheduler.py::test_memorystate_sum PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-0-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-1-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-2-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[0-3-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-0-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-1-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-2-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[1-3-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-0-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-1-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-2-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[2-3-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-0-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-1-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-2-3-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-0-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-0-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-0-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-0-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-1-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-1-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-1-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-1-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-2-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-2-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-2-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-2-3] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-3-0] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-3-1] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-3-2] PASSED
-distributed/tests/test_scheduler.py::test_memorystate_adds_up[3-3-3-3] PASSED
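
The 256 test_memorystate_adds_up entries above are the full 4x4x4x4 grid of parameter values 0 through 3; pytest builds each bracketed ID by joining the parametrized values with "-". A minimal sketch of how such an ID grid is produced (hypothetical test and argument names, not the actual signature in distributed's test_scheduler.py):

    import itertools
    import pytest

    # Four parameters, each drawn from 0..3, give 4**4 = 256 collected cases
    # with IDs like test_adds_up_sketch[0-0-0-0] ... test_adds_up_sketch[3-3-3-3].
    @pytest.mark.parametrize("a,b,c,d", itertools.product(range(4), repeat=4))
    def test_adds_up_sketch(a, b, c, d):
        assert 0 <= a + b + c + d <= 12

Collected this way, the sketch yields IDs running from [0-0-0-0] to [3-3-3-3], matching the pattern of PASSED lines in the log above.
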
-distributed/tests/test_scheduler.py::test_memory SKIPPED (need --run...)
-distributed/tests/test_scheduler.py::test_memory_no_zict 2022-08-26 14:09:53,323 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:53,325 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:53,325 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46329
-2022-08-26 14:09:53,325 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33571
-2022-08-26 14:09:53,331 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43929
-2022-08-26 14:09:53,331 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43929
-2022-08-26 14:09:53,331 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:53,331 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33123
-2022-08-26 14:09:53,331 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46329
-2022-08-26 14:09:53,331 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:53,331 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:53,331 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x7p9amtg
-2022-08-26 14:09:53,331 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:53,332 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38075
-2022-08-26 14:09:53,332 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38075
-2022-08-26 14:09:53,332 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:53,332 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34409
-2022-08-26 14:09:53,332 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46329
-2022-08-26 14:09:53,332 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:53,332 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:53,332 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9ux_dw93
-2022-08-26 14:09:53,332 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:53,335 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43929', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:53,335 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43929
-2022-08-26 14:09:53,335 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:53,336 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38075', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:53,336 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38075
-2022-08-26 14:09:53,336 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:53,336 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46329
-2022-08-26 14:09:53,336 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:53,336 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46329
-2022-08-26 14:09:53,337 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:53,337 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:53,337 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:53,350 - distributed.scheduler - INFO - Receive client connection: Client-6dec084a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:53,351 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:53,388 - distributed.scheduler - INFO - Remove client Client-6dec084a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:53,388 - distributed.scheduler - INFO - Remove client Client-6dec084a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:53,388 - distributed.scheduler - INFO - Close client connection: Client-6dec084a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:53,389 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43929
-2022-08-26 14:09:53,389 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38075
-2022-08-26 14:09:53,390 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43929', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:53,390 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43929
-2022-08-26 14:09:53,390 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38075', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:53,390 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38075
-2022-08-26 14:09:53,390 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:53,390 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-30073d17-0235-4bec-9bc2-028dfc23ddc8 Address tcp://127.0.0.1:43929 Status: Status.closing
-2022-08-26 14:09:53,391 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0ae9babe-ae38-4bc3-bae0-26a5f70b4d23 Address tcp://127.0.0.1:38075 Status: Status.closing
-2022-08-26 14:09:53,392 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:53,392 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:53,606 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_memory_no_workers 2022-08-26 14:09:53,612 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:53,613 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:53,613 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43487
-2022-08-26 14:09:53,613 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44687
-2022-08-26 14:09:53,614 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:53,614 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:53,827 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_memory_is_none 2022-08-26 14:09:53,832 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:53,834 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:53,834 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41889
-2022-08-26 14:09:53,834 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44965
-2022-08-26 14:09:53,837 - distributed.scheduler - INFO - Receive client connection: Client-6e364616-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:53,837 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:53,840 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44005
-2022-08-26 14:09:53,840 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44005
-2022-08-26 14:09:53,841 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43633
-2022-08-26 14:09:53,841 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41889
-2022-08-26 14:09:53,841 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:53,841 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:53,841 - distributed.worker - INFO -                Memory:                   5.24 GiB
-2022-08-26 14:09:53,841 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_dds55le
-2022-08-26 14:09:53,841 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:53,843 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44005', status: init, memory: 0, processing: 0>
-2022-08-26 14:09:53,843 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44005
-2022-08-26 14:09:53,843 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:53,843 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41889
-2022-08-26 14:09:53,843 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:53,844 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:53,950 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44005
-2022-08-26 14:09:53,950 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44005', status: closing, memory: 1, processing: 0>
-2022-08-26 14:09:53,950 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44005
-2022-08-26 14:09:53,951 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:53,951 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a75e642f-fe5d-48b9-93c6-e70c5bf2f4cb Address tcp://127.0.0.1:44005 Status: Status.closing
-2022-08-26 14:09:53,964 - distributed.scheduler - INFO - Remove client Client-6e364616-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:53,964 - distributed.scheduler - INFO - Remove client Client-6e364616-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:53,964 - distributed.scheduler - INFO - Close client connection: Client-6e364616-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:53,964 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:53,965 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:54,177 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_close_scheduler__close_workers_Worker 2022-08-26 14:09:54,183 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:54,185 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:54,185 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37875
-2022-08-26 14:09:54,185 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39967
-2022-08-26 14:09:54,189 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37883
-2022-08-26 14:09:54,189 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37883
-2022-08-26 14:09:54,189 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:54,189 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36101
-2022-08-26 14:09:54,189 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37875
-2022-08-26 14:09:54,190 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:54,190 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:54,190 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:54,190 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-blmd3p4q
-2022-08-26 14:09:54,190 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:54,190 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42477
-2022-08-26 14:09:54,190 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42477
-2022-08-26 14:09:54,190 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:54,190 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34195
-2022-08-26 14:09:54,190 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37875
-2022-08-26 14:09:54,190 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:54,190 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:54,191 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:54,191 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xi1vxe3k
-2022-08-26 14:09:54,191 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:54,193 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37883', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:54,194 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37883
-2022-08-26 14:09:54,194 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:54,194 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42477', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:54,194 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42477
-2022-08-26 14:09:54,195 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:54,195 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37875
-2022-08-26 14:09:54,195 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:54,195 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37875
-2022-08-26 14:09:54,195 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:54,195 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:54,195 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:54,206 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:54,207 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:54,207 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37883', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:09:54,207 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37883
-2022-08-26 14:09:54,207 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42477', name: 1, status: running, memory: 0, processing: 0>
-2022-08-26 14:09:54,207 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42477
-2022-08-26 14:09:54,208 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:54,208 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37883
-2022-08-26 14:09:54,208 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42477
-2022-08-26 14:09:54,208 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e0b3e9e0-b9c1-45fb-812d-23e1364cbf80 Address tcp://127.0.0.1:37883 Status: Status.closing
-2022-08-26 14:09:54,208 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a4839ad3-ece0-47aa-b197-02a9b744b589 Address tcp://127.0.0.1:42477 Status: Status.closing
-2022-08-26 14:09:54,209 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:45132 remote=tcp://127.0.0.1:37875>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:09:54,209 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:45134 remote=tcp://127.0.0.1:37875>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:09:54,472 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
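
The two CommClosedError tracebacks above are logged rather than raised: the traceback shows distributed/batched.py's _background_send hitting a write on a worker-to-scheduler TCP comm that the closing scheduler has already torn down, so only "Batched Comm Closed" is reported and the test still PASSES. A rough sketch of that log-and-stop pattern, using a generic asyncio queue and a stand-in exception rather than distributed's actual BatchedSend class:

    import asyncio
    import logging

    logger = logging.getLogger(__name__)

    class CommClosedSketchError(Exception):
        """Stand-in for a 'peer already closed the connection' error."""

    async def background_send(queue: asyncio.Queue, write) -> None:
        # Drain buffered messages in the background; if the remote end is
        # already gone, log the closure and exit quietly instead of
        # propagating the error to whoever queued the message.
        while True:
            msg = await queue.get()
            try:
                await write(msg)
            except CommClosedSketchError:
                logger.info("Batched comm closed; stopping background send")
                return
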
-distributed/tests/test_scheduler.py::test_close_scheduler__close_workers_Nanny 2022-08-26 14:09:54,478 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:54,479 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:54,479 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37499
-2022-08-26 14:09:54,479 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38205
-2022-08-26 14:09:54,485 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:43711'
-2022-08-26 14:09:54,485 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33657'
-2022-08-26 14:09:55,183 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36133
-2022-08-26 14:09:55,183 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36133
-2022-08-26 14:09:55,183 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:55,183 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38565
-2022-08-26 14:09:55,183 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37499
-2022-08-26 14:09:55,183 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:55,183 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:55,183 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:55,183 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s0knj1vf
-2022-08-26 14:09:55,183 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:55,184 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43445
-2022-08-26 14:09:55,185 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43445
-2022-08-26 14:09:55,185 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:55,185 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45499
-2022-08-26 14:09:55,185 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37499
-2022-08-26 14:09:55,185 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:55,185 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:55,185 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:55,185 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1eoe0agb
-2022-08-26 14:09:55,185 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:55,459 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43445', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:55,460 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43445
-2022-08-26 14:09:55,460 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:55,460 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37499
-2022-08-26 14:09:55,460 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:55,460 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:55,477 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36133', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:55,477 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36133
-2022-08-26 14:09:55,477 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:55,478 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37499
-2022-08-26 14:09:55,478 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:55,478 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:55,526 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:55,527 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:55,527 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43445
-2022-08-26 14:09:55,527 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36133
-2022-08-26 14:09:55,527 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43445', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:09:55,527 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43445
-2022-08-26 14:09:55,527 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-594b1f4c-958a-4db0-865c-779ecae53bae Address tcp://127.0.0.1:43445 Status: Status.closing
-2022-08-26 14:09:55,527 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36133', name: 1, status: running, memory: 0, processing: 0>
-2022-08-26 14:09:55,527 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-90b91291-0a19-4889-8b4e-4eab52139a1b Address tcp://127.0.0.1:36133 Status: Status.closing
-2022-08-26 14:09:55,527 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36133
-2022-08-26 14:09:55,527 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:55,528 - distributed.comm.tcp - DEBUG - Incoming connection from 'tcp://127.0.0.1:44678' to 'tcp://127.0.0.1:43711'
-2022-08-26 14:09:55,528 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33354 remote=tcp://127.0.0.1:37499>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:09:55,528 - distributed.comm.tcp - DEBUG - Setting TCP keepalive: nprobes=10, idle=10, interval=2
-2022-08-26 14:09:55,528 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33362 remote=tcp://127.0.0.1:37499>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:09:55,528 - distributed.comm.tcp - DEBUG - Setting TCP user timeout: 30000 ms
-2022-08-26 14:09:55,528 - distributed.comm.tcp - DEBUG - Incoming connection from 'tcp://127.0.0.1:47464' to 'tcp://127.0.0.1:33657'
-2022-08-26 14:09:55,528 - distributed.comm.tcp - DEBUG - Setting TCP keepalive: nprobes=10, idle=10, interval=2
-2022-08-26 14:09:55,528 - distributed.comm.tcp - DEBUG - Setting TCP user timeout: 30000 ms
-2022-08-26 14:09:55,530 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:09:55,530 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:09:55,530 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:09:55,530 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:09:55,657 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:43711'.
-2022-08-26 14:09:55,658 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:33657'.
-2022-08-26 14:09:55,892 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance 2022-08-26 14:09:55,897 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:55,899 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:55,899 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33189
-2022-08-26 14:09:55,899 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39119
-2022-08-26 14:09:55,904 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:42359'
-2022-08-26 14:09:55,904 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:46471'
-2022-08-26 14:09:56,604 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36733
-2022-08-26 14:09:56,604 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36733
-2022-08-26 14:09:56,604 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:56,604 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37799
-2022-08-26 14:09:56,604 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33189
-2022-08-26 14:09:56,604 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:56,604 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:56,604 - distributed.worker - INFO -                Memory:                   1.00 GiB
-2022-08-26 14:09:56,604 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yp6ml9uv
-2022-08-26 14:09:56,604 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:56,606 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39783
-2022-08-26 14:09:56,606 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39783
-2022-08-26 14:09:56,606 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:56,606 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37657
-2022-08-26 14:09:56,606 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33189
-2022-08-26 14:09:56,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:56,606 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:56,606 - distributed.worker - INFO -                Memory:                   1.00 GiB
-2022-08-26 14:09:56,606 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_w4khs2d
-2022-08-26 14:09:56,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:56,882 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39783', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:56,883 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39783
-2022-08-26 14:09:56,883 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:56,883 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33189
-2022-08-26 14:09:56,883 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:56,883 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:56,899 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36733', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:56,899 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36733
-2022-08-26 14:09:56,899 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:56,899 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33189
-2022-08-26 14:09:56,899 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:56,900 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:56,948 - distributed.scheduler - INFO - Receive client connection: Client-7010e94c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:56,948 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:57,911 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:09:57,911 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:09:58,225 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:09:58,225 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:09:58,346 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:09:58,347 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:09:58,373 - distributed.scheduler - INFO - Remove client Client-7010e94c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:58,373 - distributed.scheduler - INFO - Remove client Client-7010e94c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:58,374 - distributed.scheduler - INFO - Close client connection: Client-7010e94c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:58,374 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:42359'.
-2022-08-26 14:09:58,374 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:58,374 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:46471'.
-2022-08-26 14:09:58,375 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:09:58,399 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36733
-2022-08-26 14:09:58,400 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0ae5f7c9-7097-4bbc-a4a1-34b60a72fa78 Address tcp://127.0.0.1:36733 Status: Status.closing
-2022-08-26 14:09:58,401 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36733', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:58,401 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36733
-2022-08-26 14:09:58,406 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39783
-2022-08-26 14:09:58,406 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-58156f8c-0215-46d8-85a3-94852bfd6fe9 Address tcp://127.0.0.1:39783 Status: Status.closing
-2022-08-26 14:09:58,407 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39783', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:58,407 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39783
-2022-08-26 14:09:58,407 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:58,555 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:58,555 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:58,770 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_managed_memory 2022-08-26 14:09:58,776 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:58,777 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:58,777 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37363
-2022-08-26 14:09:58,778 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44735
-2022-08-26 14:09:58,782 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42045
-2022-08-26 14:09:58,782 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42045
-2022-08-26 14:09:58,782 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:58,782 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45471
-2022-08-26 14:09:58,782 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37363
-2022-08-26 14:09:58,782 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:58,782 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:58,782 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:58,782 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-opihe5l7
-2022-08-26 14:09:58,782 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:58,783 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44699
-2022-08-26 14:09:58,783 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44699
-2022-08-26 14:09:58,783 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:58,783 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36645
-2022-08-26 14:09:58,783 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37363
-2022-08-26 14:09:58,783 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:58,783 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:58,783 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:58,783 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-juom4swc
-2022-08-26 14:09:58,783 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:58,786 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42045', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:58,787 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42045
-2022-08-26 14:09:58,787 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:58,787 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44699', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:58,787 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44699
-2022-08-26 14:09:58,787 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:58,788 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37363
-2022-08-26 14:09:58,788 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:58,788 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37363
-2022-08-26 14:09:58,788 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:58,788 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:58,788 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:58,802 - distributed.scheduler - INFO - Receive client connection: Client-712bd4cd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:58,802 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:58,839 - distributed.scheduler - INFO - Remove client Client-712bd4cd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:58,840 - distributed.scheduler - INFO - Remove client Client-712bd4cd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:58,840 - distributed.scheduler - INFO - Close client connection: Client-712bd4cd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:58,840 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42045
-2022-08-26 14:09:58,841 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44699
-2022-08-26 14:09:58,842 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42045', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:58,842 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42045
-2022-08-26 14:09:58,842 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44699', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:58,842 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44699
-2022-08-26 14:09:58,842 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:58,842 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-82299b79-0656-4117-83f3-c49200d17e30 Address tcp://127.0.0.1:42045 Status: Status.closing
-2022-08-26 14:09:58,842 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-889f640f-72c8-4364-9036-1ab025edf0c6 Address tcp://127.0.0.1:44699 Status: Status.closing
-2022-08-26 14:09:58,844 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:58,844 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:59,058 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_workers_and_keys 2022-08-26 14:09:59,064 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:59,066 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:59,066 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41933
-2022-08-26 14:09:59,066 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37203
-2022-08-26 14:09:59,072 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43893
-2022-08-26 14:09:59,073 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43893
-2022-08-26 14:09:59,073 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:59,073 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35655
-2022-08-26 14:09:59,073 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41933
-2022-08-26 14:09:59,073 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,073 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:59,073 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:59,073 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-059ca9wv
-2022-08-26 14:09:59,073 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,073 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34871
-2022-08-26 14:09:59,074 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34871
-2022-08-26 14:09:59,074 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:59,074 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32961
-2022-08-26 14:09:59,074 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41933
-2022-08-26 14:09:59,074 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,074 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:59,074 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:59,074 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ws7x1tub
-2022-08-26 14:09:59,074 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,074 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40193
-2022-08-26 14:09:59,075 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40193
-2022-08-26 14:09:59,075 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:09:59,075 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44249
-2022-08-26 14:09:59,075 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41933
-2022-08-26 14:09:59,075 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,075 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:59,075 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:59,075 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-evd6nzt_
-2022-08-26 14:09:59,075 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,079 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43893', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:59,079 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43893
-2022-08-26 14:09:59,079 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,079 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34871', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:59,080 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34871
-2022-08-26 14:09:59,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,080 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40193', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:59,080 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40193
-2022-08-26 14:09:59,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,081 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41933
-2022-08-26 14:09:59,081 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,081 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41933
-2022-08-26 14:09:59,081 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,081 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41933
-2022-08-26 14:09:59,081 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,081 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,082 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,082 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,095 - distributed.scheduler - INFO - Receive client connection: Client-7158a405-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:59,096 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,116 - distributed.scheduler - INFO - Remove client Client-7158a405-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:59,118 - distributed.scheduler - INFO - Remove client Client-7158a405-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:59,118 - distributed.scheduler - INFO - Close client connection: Client-7158a405-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:59,127 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43893
-2022-08-26 14:09:59,128 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34871
-2022-08-26 14:09:59,128 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40193
-2022-08-26 14:09:59,129 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43893', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:59,129 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43893
-2022-08-26 14:09:59,130 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34871', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:59,130 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34871
-2022-08-26 14:09:59,130 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40193', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:59,130 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40193
-2022-08-26 14:09:59,130 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:59,130 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9ff8f7a0-4d10-4981-a0c3-b11b4d432bef Address tcp://127.0.0.1:43893 Status: Status.closing
-2022-08-26 14:09:59,130 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-68884072-070a-4d67-a04d-c765ad67929b Address tcp://127.0.0.1:34871 Status: Status.closing
-2022-08-26 14:09:59,130 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d3b2de98-7a7b-40ff-8f8d-56bc89296515 Address tcp://127.0.0.1:40193 Status: Status.closing
-2022-08-26 14:09:59,132 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:59,132 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:59,346 - distributed.utils_perf - WARNING - full garbage collections took 72% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_missing_data1 2022-08-26 14:09:59,353 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:59,354 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:59,355 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33209
-2022-08-26 14:09:59,355 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36717
-2022-08-26 14:09:59,359 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43747
-2022-08-26 14:09:59,359 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43747
-2022-08-26 14:09:59,359 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:59,359 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39909
-2022-08-26 14:09:59,359 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33209
-2022-08-26 14:09:59,360 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,360 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:59,360 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:59,360 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-efmgvl_v
-2022-08-26 14:09:59,360 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,360 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42053
-2022-08-26 14:09:59,360 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42053
-2022-08-26 14:09:59,360 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:59,360 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37745
-2022-08-26 14:09:59,361 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33209
-2022-08-26 14:09:59,361 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,361 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:59,361 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:59,361 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1lguvlow
-2022-08-26 14:09:59,361 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,364 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43747', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:59,364 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43747
-2022-08-26 14:09:59,364 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,365 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42053', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:59,365 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42053
-2022-08-26 14:09:59,365 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,365 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33209
-2022-08-26 14:09:59,365 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,365 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33209
-2022-08-26 14:09:59,366 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,366 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,366 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,377 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43747
-2022-08-26 14:09:59,377 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42053
-2022-08-26 14:09:59,378 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43747', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:59,378 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43747
-2022-08-26 14:09:59,378 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42053', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:59,378 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42053
-2022-08-26 14:09:59,379 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:59,379 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-378c7f2e-708b-4cc5-ae7d-f45fdbe9a1eb Address tcp://127.0.0.1:43747 Status: Status.closing
-2022-08-26 14:09:59,379 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-03aa3c36-5d99-47bd-b7d2-175f9640f38d Address tcp://127.0.0.1:42053 Status: Status.closing
-2022-08-26 14:09:59,380 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:59,380 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:59,594 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_missing_data2 2022-08-26 14:09:59,599 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:59,601 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:59,601 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40065
-2022-08-26 14:09:59,601 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44155
-2022-08-26 14:09:59,606 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44925
-2022-08-26 14:09:59,606 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44925
-2022-08-26 14:09:59,606 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:59,606 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46769
-2022-08-26 14:09:59,606 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40065
-2022-08-26 14:09:59,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,606 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:59,606 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:59,606 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-62utnbms
-2022-08-26 14:09:59,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,607 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44999
-2022-08-26 14:09:59,607 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44999
-2022-08-26 14:09:59,607 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:59,607 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43555
-2022-08-26 14:09:59,607 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40065
-2022-08-26 14:09:59,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,607 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:59,607 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:59,607 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ffke_97c
-2022-08-26 14:09:59,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,610 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44925', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:59,611 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44925
-2022-08-26 14:09:59,611 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,611 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44999', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:59,611 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44999
-2022-08-26 14:09:59,611 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,612 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40065
-2022-08-26 14:09:59,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,612 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40065
-2022-08-26 14:09:59,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,612 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,612 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,626 - distributed.scheduler - INFO - Receive client connection: Client-71a994e3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:59,626 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,739 - distributed.scheduler - INFO - Remove client Client-71a994e3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:59,739 - distributed.scheduler - INFO - Remove client Client-71a994e3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:59,739 - distributed.scheduler - INFO - Close client connection: Client-71a994e3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:59,739 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44925
-2022-08-26 14:09:59,740 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44999
-2022-08-26 14:09:59,741 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44999', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:59,741 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44999
-2022-08-26 14:09:59,741 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6d512358-04c8-4d15-80b2-31d4e6afe3fe Address tcp://127.0.0.1:44999 Status: Status.closing
-2022-08-26 14:09:59,742 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-891bd949-bf63-4b9e-90a0-c7a3f762eca3 Address tcp://127.0.0.1:44925 Status: Status.closing
-2022-08-26 14:09:59,742 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44925', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:09:59,742 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44925
-2022-08-26 14:09:59,742 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:09:59,743 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:09:59,743 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:09:59,956 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_raises_missing_data3[False] 2022-08-26 14:09:59,963 - distributed.scheduler - INFO - State start
-2022-08-26 14:09:59,964 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:09:59,964 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36699
-2022-08-26 14:09:59,964 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45055
-2022-08-26 14:09:59,969 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45749
-2022-08-26 14:09:59,969 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45749
-2022-08-26 14:09:59,969 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:09:59,969 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45435
-2022-08-26 14:09:59,969 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36699
-2022-08-26 14:09:59,969 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,969 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:09:59,969 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:59,969 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8unhv9oa
-2022-08-26 14:09:59,969 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,970 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34477
-2022-08-26 14:09:59,970 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34477
-2022-08-26 14:09:59,970 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:09:59,970 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44463
-2022-08-26 14:09:59,970 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36699
-2022-08-26 14:09:59,970 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,970 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:09:59,970 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:09:59,970 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vq7ltzz4
-2022-08-26 14:09:59,971 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,973 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45749', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:59,974 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45749
-2022-08-26 14:09:59,974 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,974 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34477', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:09:59,974 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34477
-2022-08-26 14:09:59,974 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,975 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36699
-2022-08-26 14:09:59,975 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,975 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36699
-2022-08-26 14:09:59,975 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:09:59,975 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,975 - distributed.core - INFO - Starting established connection
-2022-08-26 14:09:59,989 - distributed.scheduler - INFO - Receive client connection: Client-71e0ff18-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:09:59,989 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:00,017 - distributed.worker - WARNING - Could not find data: {'int-aba9cdc96c49f4e4e8eba1b557fcd037': ['tcp://127.0.0.1:45749'], 'int-ace910284a9b4d9b877cf9f8e4fd9770': ['tcp://127.0.0.1:45749'], 'int-bb5361e9e63bc408906ec838d1817eb3': ['tcp://127.0.0.1:45749'], 'int-7ec5d3339274cee5cb507a4e4d28e791': ['tcp://127.0.0.1:45749'], 'int-c56e7bae3484c9b6750417fbf89d6509': ['tcp://127.0.0.1:45749'], 'int-78eff8da8705885135edf00d33d1a882': ['tcp://127.0.0.1:45749'], 'int-1fc4b5db089269ea227402ef9ea18691': ['tcp://127.0.0.1:45749'], 'int-6f0d41ddd500de611c3e5b2e1286adae': ['tcp://127.0.0.1:45749'], 'int-58e78e1b34eb49a68c65b54815d1b158': ['tcp://127.0.0.1:45749'], 'int-1e9eef22eca3c7724d48d6e38ce21d9d': ['tcp://127.0.0.1:45749'], 'int-2059013ff6db5fc2f6d2de1087429dbd': ['tcp://127.0.0.1:45749'], 'int-8d3e19c54d3fb6b7ee03843326025e51': ['tcp://127.0.0.1:45749'], 'int-ea1fa36eb048f89cc9b6b045a2a731d2': ['tcp://127.0.0.1:45749'], 'int-f14df74c6fe0e49005ce8e0b85ccb8cd':
 ['tcp://127.0.0.1:45749'], 'int-ae9efb7483f24954695452618c12690c': ['tcp://127.0.0.1:45749'], 'int-517c6834275f1aa7cd8be9d33eca6516': ['tcp://127.0.0.1:45749'], 'int-5cd9541ea58b401f115b751e79eabbff': ['tcp://127.0.0.1:45749'], 'int-e9e0ac12a38f591e93f20303313b3b98': ['tcp://127.0.0.1:45749'], 'int-ef87721393fff7e37ea7661abb874282': ['tcp://127.0.0.1:45749'], 'int-e19a5800872765e3f77c5fd2be6a7445': ['tcp://127.0.0.1:45749'], 'int-000d1e36243ad422488f838dcef1c3fb': ['tcp://127.0.0.1:45749'], 'int-d0108875ff6c9393709b5c517bac5802': ['tcp://127.0.0.1:45749'], 'int-b9a08a92b9acce11d0d18cff20edc65d': ['tcp://127.0.0.1:45749'], 'int-cce8a24296f12ba75bbd86cc9887f1a0': ['tcp://127.0.0.1:45749'], 'int-8af1e866f46a9f199166f3e516c29547': ['tcp://127.0.0.1:45749'], 'int-e604f137093c92fce4a6bbe923f03b55': ['tcp://127.0.0.1:45749'], 'int-e1757a5f382174989119fb75f8a911ca': ['tcp://127.0.0.1:45749'], 'int-5c1604c6b764a75887b7e1698a7fe58a': ['tcp://127.0.0.1:45749'], 'int-d3395e15f605bc35ab1bac6341a
 285e2': ['tcp://127.0.0.1:45749'], 'int-416ed02830be89512de1ca49a97849a1': ['tcp://127.0.0.1:45749'], 'int-5c8a950061aa331153f4a172bbcbfd1b': ['tcp://127.0.0.1:45749'], 'int-3f30626b8bdbdc8366257c053839456f': ['tcp://127.0.0.1:45749'], 'int-06e5a71c9839bd98760be56f629b24cc': ['tcp://127.0.0.1:45749'], 'int-1c038a6357cd80d62ff8254dd775c7ab': ['tcp://127.0.0.1:45749'], 'int-a4c1e83598ced58b2a814b91704148be': ['tcp://127.0.0.1:45749'], 'int-c92816888d8ccd5e9fecc429cdefa15a': ['tcp://127.0.0.1:45749'], 'int-5b4ec30c09ef0f7eca3753c7f722b654': ['tcp://127.0.0.1:45749'], 'int-6234f530ffb77cc79e4130d405077fe2': ['tcp://127.0.0.1:45749'], 'int-fd7d51b02c55f751b77d142bb5df7d83': ['tcp://127.0.0.1:45749'], 'int-54abf7066a03bddb650fbb1616e27bc5': ['tcp://127.0.0.1:45749'], 'int-6b4d45ad3ec6a6eabf646b7d2e288027': ['tcp://127.0.0.1:45749'], 'int-952289aae0895ad772245a7900072b79': ['tcp://127.0.0.1:45749'], 'int-3801f82ec92c429293f6af7862ec48e5': ['tcp://127.0.0.1:45749'], 'int-ce9a05dd6ec76c6a6d1
 71b0c055f3127': ['tcp://127.0.0.1:45749'], 'int-c0a8a20f903a4915b94db8de3ea63195': ['tcp://127.0.0.1:45749'], 'int-c82c322b89c53ba312b1f9886f5de094': ['tcp://127.0.0.1:45749'], 'int-93d434661371fd077c2a114f1b40b39e': ['tcp://127.0.0.1:45749'], 'int-376b807ff83ad145a7da98f0b8f80f15': ['tcp://127.0.0.1:45749'], 'int-7c5fd0057a3c3a37de5af21c4bc781db': ['tcp://127.0.0.1:45749'], 'int-8e1cf7668e979738658dea286a661b1f': ['tcp://127.0.0.1:45749']} on workers: [] (who_has: {'int-5c8a950061aa331153f4a172bbcbfd1b': ['tcp://127.0.0.1:45749'], 'int-c0a8a20f903a4915b94db8de3ea63195': ['tcp://127.0.0.1:45749'], 'int-58e78e1b34eb49a68c65b54815d1b158': ['tcp://127.0.0.1:45749'], 'int-d3395e15f605bc35ab1bac6341a285e2': ['tcp://127.0.0.1:45749'], 'int-5cd9541ea58b401f115b751e79eabbff': ['tcp://127.0.0.1:45749'], 'int-ce9a05dd6ec76c6a6d171b0c055f3127': ['tcp://127.0.0.1:45749'], 'int-7ec5d3339274cee5cb507a4e4d28e791': ['tcp://127.0.0.1:45749'], 'int-06e5a71c9839bd98760be56f629b24cc': ['tcp://127.0.0.1
 :45749'], 'int-ea1fa36eb048f89cc9b6b045a2a731d2': ['tcp://127.0.0.1:45749'], 'int-c56e7bae3484c9b6750417fbf89d6509': ['tcp://127.0.0.1:45749'], 'int-e19a5800872765e3f77c5fd2be6a7445': ['tcp://127.0.0.1:45749'], 'int-1fc4b5db089269ea227402ef9ea18691': ['tcp://127.0.0.1:45749'], 'int-3f30626b8bdbdc8366257c053839456f': ['tcp://127.0.0.1:45749'], 'int-cce8a24296f12ba75bbd86cc9887f1a0': ['tcp://127.0.0.1:45749'], 'int-5c1604c6b764a75887b7e1698a7fe58a': ['tcp://127.0.0.1:45749'], 'int-e1757a5f382174989119fb75f8a911ca': ['tcp://127.0.0.1:45749'], 'int-8e1cf7668e979738658dea286a661b1f': ['tcp://127.0.0.1:45749'], 'int-aba9cdc96c49f4e4e8eba1b557fcd037': ['tcp://127.0.0.1:45749'], 'int-e9e0ac12a38f591e93f20303313b3b98': ['tcp://127.0.0.1:45749'], 'int-ace910284a9b4d9b877cf9f8e4fd9770': ['tcp://127.0.0.1:45749'], 'int-6234f530ffb77cc79e4130d405077fe2': ['tcp://127.0.0.1:45749'], 'int-54abf7066a03bddb650fbb1616e27bc5': ['tcp://127.0.0.1:45749'], 'int-8af1e866f46a9f199166f3e516c29547': ['tcp://1
 27.0.0.1:45749'], 'int-f14df74c6fe0e49005ce8e0b85ccb8cd': ['tcp://127.0.0.1:45749'], 'int-bb5361e9e63bc408906ec838d1817eb3': ['tcp://127.0.0.1:45749'], 'int-1e9eef22eca3c7724d48d6e38ce21d9d': ['tcp://127.0.0.1:45749'], 'int-7c5fd0057a3c3a37de5af21c4bc781db': ['tcp://127.0.0.1:45749'], 'int-93d434661371fd077c2a114f1b40b39e': ['tcp://127.0.0.1:45749'], 'int-5b4ec30c09ef0f7eca3753c7f722b654': ['tcp://127.0.0.1:45749'], 'int-1c038a6357cd80d62ff8254dd775c7ab': ['tcp://127.0.0.1:45749'], 'int-416ed02830be89512de1ca49a97849a1': ['tcp://127.0.0.1:45749'], 'int-3801f82ec92c429293f6af7862ec48e5': ['tcp://127.0.0.1:45749'], 'int-c92816888d8ccd5e9fecc429cdefa15a': ['tcp://127.0.0.1:45749'], 'int-8d3e19c54d3fb6b7ee03843326025e51': ['tcp://127.0.0.1:45749'], 'int-78eff8da8705885135edf00d33d1a882': ['tcp://127.0.0.1:45749'], 'int-fd7d51b02c55f751b77d142bb5df7d83': ['tcp://127.0.0.1:45749'], 'int-c82c322b89c53ba312b1f9886f5de094': ['tcp://127.0.0.1:45749'], 'int-952289aae0895ad772245a7900072b79': [
 'tcp://127.0.0.1:45749'], 'int-ae9efb7483f24954695452618c12690c': ['tcp://127.0.0.1:45749'], 'int-6f0d41ddd500de611c3e5b2e1286adae': ['tcp://127.0.0.1:45749'], 'int-6b4d45ad3ec6a6eabf646b7d2e288027': ['tcp://127.0.0.1:45749'], 'int-517c6834275f1aa7cd8be9d33eca6516': ['tcp://127.0.0.1:45749'], 'int-b9a08a92b9acce11d0d18cff20edc65d': ['tcp://127.0.0.1:45749'], 'int-2059013ff6db5fc2f6d2de1087429dbd': ['tcp://127.0.0.1:45749'], 'int-000d1e36243ad422488f838dcef1c3fb': ['tcp://127.0.0.1:45749'], 'int-a4c1e83598ced58b2a814b91704148be': ['tcp://127.0.0.1:45749'], 'int-e604f137093c92fce4a6bbe923f03b55': ['tcp://127.0.0.1:45749'], 'int-ef87721393fff7e37ea7661abb874282': ['tcp://127.0.0.1:45749'], 'int-d0108875ff6c9393709b5c517bac5802': ['tcp://127.0.0.1:45749'], 'int-376b807ff83ad145a7da98f0b8f80f15': ['tcp://127.0.0.1:45749']})
-2022-08-26 14:10:00,018 - distributed.scheduler - WARNING - Worker tcp://127.0.0.1:34477 failed to acquire keys: {'int-aba9cdc96c49f4e4e8eba1b557fcd037': ('tcp://127.0.0.1:45749',), 'int-ace910284a9b4d9b877cf9f8e4fd9770': ('tcp://127.0.0.1:45749',), 'int-bb5361e9e63bc408906ec838d1817eb3': ('tcp://127.0.0.1:45749',), 'int-7ec5d3339274cee5cb507a4e4d28e791': ('tcp://127.0.0.1:45749',), 'int-c56e7bae3484c9b6750417fbf89d6509': ('tcp://127.0.0.1:45749',), 'int-78eff8da8705885135edf00d33d1a882': ('tcp://127.0.0.1:45749',), 'int-1fc4b5db089269ea227402ef9ea18691': ('tcp://127.0.0.1:45749',), 'int-6f0d41ddd500de611c3e5b2e1286adae': ('tcp://127.0.0.1:45749',), 'int-58e78e1b34eb49a68c65b54815d1b158': ('tcp://127.0.0.1:45749',), 'int-1e9eef22eca3c7724d48d6e38ce21d9d': ('tcp://127.0.0.1:45749',), 'int-2059013ff6db5fc2f6d2de1087429dbd': ('tcp://127.0.0.1:45749',), 'int-8d3e19c54d3fb6b7ee03843326025e51': ('tcp://127.0.0.1:45749',), 'int-ea1fa36eb048f89cc9b6b045a2a731d2': ('tcp://127.0.0.1:4
5749',), 'int-f14df74c6fe0e49005ce8e0b85ccb8cd': ('tcp://127.0.0.1:45749',), 'int-ae9efb7483f24954695452618c12690c': ('tcp://127.0.0.1:45749',), 'int-517c6834275f1aa7cd8be9d33eca6516': ('tcp://127.0.0.1:45749',), 'int-5cd9541ea58b401f115b751e79eabbff': ('tcp://127.0.0.1:45749',), 'int-e9e0ac12a38f591e93f20303313b3b98': ('tcp://127.0.0.1:45749',), 'int-ef87721393fff7e37ea7661abb874282': ('tcp://127.0.0.1:45749',), 'int-e19a5800872765e3f77c5fd2be6a7445': ('tcp://127.0.0.1:45749',), 'int-000d1e36243ad422488f838dcef1c3fb': ('tcp://127.0.0.1:45749',), 'int-d0108875ff6c9393709b5c517bac5802': ('tcp://127.0.0.1:45749',), 'int-b9a08a92b9acce11d0d18cff20edc65d': ('tcp://127.0.0.1:45749',), 'int-cce8a24296f12ba75bbd86cc9887f1a0': ('tcp://127.0.0.1:45749',), 'int-8af1e866f46a9f199166f3e516c29547': ('tcp://127.0.0.1:45749',), 'int-e604f137093c92fce4a6bbe923f03b55': ('tcp://127.0.0.1:45749',), 'int-e1757a5f382174989119fb75f8a911ca': ('tcp://127.0.0.1:45749',), 'int-5c1604c6b764a75887b7e1698a7fe58a
 ': ('tcp://127.0.0.1:45749',), 'int-d3395e15f605bc35ab1bac6341a285e2': ('tcp://127.0.0.1:45749',), 'int-416ed02830be89512de1ca49a97849a1': ('tcp://127.0.0.1:45749',), 'int-5c8a950061aa331153f4a172bbcbfd1b': ('tcp://127.0.0.1:45749',), 'int-3f30626b8bdbdc8366257c053839456f': ('tcp://127.0.0.1:45749',), 'int-06e5a71c9839bd98760be56f629b24cc': ('tcp://127.0.0.1:45749',), 'int-1c038a6357cd80d62ff8254dd775c7ab': ('tcp://127.0.0.1:45749',), 'int-a4c1e83598ced58b2a814b91704148be': ('tcp://127.0.0.1:45749',), 'int-c92816888d8ccd5e9fecc429cdefa15a': ('tcp://127.0.0.1:45749',), 'int-5b4ec30c09ef0f7eca3753c7f722b654': ('tcp://127.0.0.1:45749',), 'int-6234f530ffb77cc79e4130d405077fe2': ('tcp://127.0.0.1:45749',), 'int-fd7d51b02c55f751b77d142bb5df7d83': ('tcp://127.0.0.1:45749',), 'int-54abf7066a03bddb650fbb1616e27bc5': ('tcp://127.0.0.1:45749',), 'int-6b4d45ad3ec6a6eabf646b7d2e288027': ('tcp://127.0.0.1:45749',), 'int-952289aae0895ad772245a7900072b79': ('tcp://127.0.0.1:45749',), 'int-3801f82ec
 92c429293f6af7862ec48e5': ('tcp://127.0.0.1:45749',), 'int-ce9a05dd6ec76c6a6d171b0c055f3127': ('tcp://127.0.0.1:45749',), 'int-c0a8a20f903a4915b94db8de3ea63195': ('tcp://127.0.0.1:45749',), 'int-c82c322b89c53ba312b1f9886f5de094': ('tcp://127.0.0.1:45749',), 'int-93d434661371fd077c2a114f1b40b39e': ('tcp://127.0.0.1:45749',), 'int-376b807ff83ad145a7da98f0b8f80f15': ('tcp://127.0.0.1:45749',), 'int-7c5fd0057a3c3a37de5af21c4bc781db': ('tcp://127.0.0.1:45749',), 'int-8e1cf7668e979738658dea286a661b1f': ('tcp://127.0.0.1:45749',)}
-2022-08-26 14:10:00,018 - distributed.scheduler - INFO - Remove client Client-71e0ff18-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:00,019 - distributed.scheduler - INFO - Remove client Client-71e0ff18-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:00,019 - distributed.scheduler - INFO - Close client connection: Client-71e0ff18-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:00,019 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45749
-2022-08-26 14:10:00,020 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34477
-2022-08-26 14:10:00,021 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45749', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:00,021 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45749
-2022-08-26 14:10:00,021 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34477', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:00,021 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34477
-2022-08-26 14:10:00,021 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:00,021 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f156862c-82d3-4b8a-a2f2-ffc027a6f8cc Address tcp://127.0.0.1:45749 Status: Status.closing
-2022-08-26 14:10:00,022 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bf49c94b-3b77-41d5-a5b0-ed10c91854d7 Address tcp://127.0.0.1:34477 Status: Status.closing
-2022-08-26 14:10:00,023 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:00,023 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:00,237 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_raises_missing_data3[True] 2022-08-26 14:10:00,243 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:00,245 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:00,245 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44355
-2022-08-26 14:10:00,245 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43007
-2022-08-26 14:10:00,250 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37249
-2022-08-26 14:10:00,250 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37249
-2022-08-26 14:10:00,250 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:00,250 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39703
-2022-08-26 14:10:00,250 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44355
-2022-08-26 14:10:00,250 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,250 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:00,250 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:00,250 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-73fhzske
-2022-08-26 14:10:00,250 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,251 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38087
-2022-08-26 14:10:00,251 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38087
-2022-08-26 14:10:00,251 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:00,251 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42825
-2022-08-26 14:10:00,251 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44355
-2022-08-26 14:10:00,251 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,251 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:00,251 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:00,251 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a802afma
-2022-08-26 14:10:00,252 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,255 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37249', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:00,255 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37249
-2022-08-26 14:10:00,255 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:00,255 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38087', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:00,256 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38087
-2022-08-26 14:10:00,256 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:00,256 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44355
-2022-08-26 14:10:00,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,257 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44355
-2022-08-26 14:10:00,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,257 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:00,257 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:00,271 - distributed.scheduler - INFO - Receive client connection: Client-720bf633-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:00,272 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:00,285 - distributed.scheduler - INFO - Remove client Client-720bf633-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:00,286 - distributed.scheduler - INFO - Remove client Client-720bf633-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:00,287 - distributed.scheduler - INFO - Close client connection: Client-720bf633-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:00,296 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37249
-2022-08-26 14:10:00,296 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38087
-2022-08-26 14:10:00,297 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37249', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:00,297 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37249
-2022-08-26 14:10:00,298 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38087', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:00,298 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38087
-2022-08-26 14:10:00,298 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:00,298 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c1239ec1-0e84-4712-83a3-98168acbc2b3 Address tcp://127.0.0.1:37249 Status: Status.closing
-2022-08-26 14:10:00,298 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-90e7e07f-e8ca-4627-b16c-5b56cf6fed04 Address tcp://127.0.0.1:38087 Status: Status.closing
-2022-08-26 14:10:00,299 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:00,299 - distributed.scheduler - INFO - Scheduler closing all comms
-XFAIL
-distributed/tests/test_scheduler.py::test_rebalance_no_workers 2022-08-26 14:10:00,376 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:00,378 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:00,378 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46381
-2022-08-26 14:10:00,378 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34119
-2022-08-26 14:10:00,379 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:00,379 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:00,595 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_no_limit 2022-08-26 14:10:00,600 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:00,602 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:00,602 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35549
-2022-08-26 14:10:00,602 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33929
-2022-08-26 14:10:00,606 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44397
-2022-08-26 14:10:00,606 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44397
-2022-08-26 14:10:00,606 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:00,606 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36067
-2022-08-26 14:10:00,606 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35549
-2022-08-26 14:10:00,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,607 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:00,607 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pew85fhe
-2022-08-26 14:10:00,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,607 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36757
-2022-08-26 14:10:00,607 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36757
-2022-08-26 14:10:00,607 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:00,607 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38775
-2022-08-26 14:10:00,607 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35549
-2022-08-26 14:10:00,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,608 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:00,608 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zdvoso62
-2022-08-26 14:10:00,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,610 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44397', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:00,611 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44397
-2022-08-26 14:10:00,611 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:00,611 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36757', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:00,611 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36757
-2022-08-26 14:10:00,611 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:00,612 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35549
-2022-08-26 14:10:00,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,612 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35549
-2022-08-26 14:10:00,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:00,612 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:00,612 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:00,626 - distributed.scheduler - INFO - Receive client connection: Client-72422e1f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:00,626 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:00,659 - distributed.scheduler - INFO - Remove client Client-72422e1f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:00,659 - distributed.scheduler - INFO - Remove client Client-72422e1f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:00,660 - distributed.scheduler - INFO - Close client connection: Client-72422e1f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:00,660 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44397
-2022-08-26 14:10:00,660 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36757
-2022-08-26 14:10:00,661 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44397', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:00,661 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44397
-2022-08-26 14:10:00,662 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36757', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:00,662 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36757
-2022-08-26 14:10:00,662 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:00,662 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5857365e-fa77-4f49-8ed8-6b5ec5233cf5 Address tcp://127.0.0.1:44397 Status: Status.closing
-2022-08-26 14:10:00,662 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4db838b1-72b3-43b8-bb2b-885ccec92390 Address tcp://127.0.0.1:36757 Status: Status.closing
-2022-08-26 14:10:00,663 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:00,663 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:00,876 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_no_recipients 2022-08-26 14:10:00,883 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:00,884 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:00,884 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44829
-2022-08-26 14:10:00,884 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:32925
-2022-08-26 14:10:00,890 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:34431'
-2022-08-26 14:10:00,890 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36363'
-2022-08-26 14:10:01,590 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40373
-2022-08-26 14:10:01,590 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40373
-2022-08-26 14:10:01,590 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:01,590 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34097
-2022-08-26 14:10:01,590 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44829
-2022-08-26 14:10:01,590 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:01,590 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:01,590 - distributed.worker - INFO -                Memory:                   0.98 GiB
-2022-08-26 14:10:01,590 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lh_ufeyd
-2022-08-26 14:10:01,590 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:01,592 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37519
-2022-08-26 14:10:01,592 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37519
-2022-08-26 14:10:01,592 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:01,592 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33609
-2022-08-26 14:10:01,592 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44829
-2022-08-26 14:10:01,592 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:01,592 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:01,592 - distributed.worker - INFO -                Memory:                   0.98 GiB
-2022-08-26 14:10:01,592 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-iawvbd2b
-2022-08-26 14:10:01,592 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:01,868 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37519', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:01,869 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37519
-2022-08-26 14:10:01,869 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:01,869 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44829
-2022-08-26 14:10:01,869 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:01,869 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:01,887 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40373', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:01,887 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40373
-2022-08-26 14:10:01,887 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:01,887 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44829
-2022-08-26 14:10:01,888 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:01,888 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:01,933 - distributed.scheduler - INFO - Receive client connection: Client-73099274-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:01,933 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:02,903 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:02,903 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:02,905 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:02,905 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:02,924 - distributed.scheduler - INFO - Remove client Client-73099274-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:02,925 - distributed.scheduler - INFO - Remove client Client-73099274-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:02,925 - distributed.scheduler - INFO - Close client connection: Client-73099274-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:02,925 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:34431'.
-2022-08-26 14:10:02,925 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:10:02,925 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36363'.
-2022-08-26 14:10:02,926 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:10:02,937 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37519
-2022-08-26 14:10:02,938 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-586607f9-c840-4508-8ca7-006d51ef3b7d Address tcp://127.0.0.1:37519 Status: Status.closing
-2022-08-26 14:10:02,938 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37519', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:02,938 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37519
-2022-08-26 14:10:02,955 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40373
-2022-08-26 14:10:02,956 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c9a22664-c30e-425d-a784-1f5763553cf6 Address tcp://127.0.0.1:40373 Status: Status.closing
-2022-08-26 14:10:02,957 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40373', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:02,957 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40373
-2022-08-26 14:10:02,957 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:03,092 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:03,092 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:03,307 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
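
[The "Run out-of-band function 'lambda'" lines in the block above come from the test driving code on the workers outside the task graph. Purely as a hedged orientation sketch (not part of the removed file or of this commit), that kind of out-of-band call corresponds to the public Client.run() API; the payload below is an arbitrary stand-in:

    # Sketch only: run a function on every worker, outside the task graph.
    # Assumes dask.distributed is installed; the lambda is an arbitrary payload.
    from distributed import Client, LocalCluster

    cluster = LocalCluster(n_workers=2, threads_per_worker=1)
    client = Client(cluster)

    # Returns a dict keyed by worker address, one entry per worker.
    results = client.run(lambda: "ok")
    print(results)

    client.close()
    cluster.close()
]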
-distributed/tests/test_scheduler.py::test_rebalance_skip_recipient 2022-08-26 14:10:03,313 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:03,314 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:03,314 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38633
-2022-08-26 14:10:03,314 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34117
-2022-08-26 14:10:03,320 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38813
-2022-08-26 14:10:03,321 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38813
-2022-08-26 14:10:03,321 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:03,321 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35339
-2022-08-26 14:10:03,321 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38633
-2022-08-26 14:10:03,321 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,321 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:03,321 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2v7pdmq0
-2022-08-26 14:10:03,321 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,321 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36875
-2022-08-26 14:10:03,322 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36875
-2022-08-26 14:10:03,322 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:03,322 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42183
-2022-08-26 14:10:03,322 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38633
-2022-08-26 14:10:03,322 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,322 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:03,322 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t2cm3r84
-2022-08-26 14:10:03,322 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,322 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45961
-2022-08-26 14:10:03,322 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45961
-2022-08-26 14:10:03,322 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:10:03,323 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32839
-2022-08-26 14:10:03,323 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38633
-2022-08-26 14:10:03,323 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,323 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:03,323 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-893z9vnc
-2022-08-26 14:10:03,323 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,326 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38813', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:03,327 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38813
-2022-08-26 14:10:03,327 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,327 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36875', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:03,327 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36875
-2022-08-26 14:10:03,327 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,328 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45961', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:03,328 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45961
-2022-08-26 14:10:03,328 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,328 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38633
-2022-08-26 14:10:03,328 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,329 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38633
-2022-08-26 14:10:03,329 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,329 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38633
-2022-08-26 14:10:03,329 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,343 - distributed.scheduler - INFO - Receive client connection: Client-73e0cd5d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:03,344 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,369 - distributed.scheduler - INFO - Remove client Client-73e0cd5d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:03,369 - distributed.scheduler - INFO - Remove client Client-73e0cd5d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:03,369 - distributed.scheduler - INFO - Close client connection: Client-73e0cd5d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:03,370 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38813
-2022-08-26 14:10:03,370 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36875
-2022-08-26 14:10:03,370 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45961
-2022-08-26 14:10:03,371 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38813', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:03,371 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38813
-2022-08-26 14:10:03,372 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36875', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:03,372 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36875
-2022-08-26 14:10:03,372 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45961', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:03,372 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45961
-2022-08-26 14:10:03,372 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:03,372 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ed85e8c0-406f-46f4-9f37-0d008266da7d Address tcp://127.0.0.1:38813 Status: Status.closing
-2022-08-26 14:10:03,372 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4033dbae-0904-4a16-9a2e-9e67750dfa2b Address tcp://127.0.0.1:36875 Status: Status.closing
-2022-08-26 14:10:03,373 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-40f8b20f-cb98-4da8-ab8a-c69347951bcb Address tcp://127.0.0.1:45961 Status: Status.closing
-2022-08-26 14:10:03,374 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:03,374 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:03,588 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_skip_all_recipients 2022-08-26 14:10:03,594 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:03,596 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:03,596 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36787
-2022-08-26 14:10:03,596 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37469
-2022-08-26 14:10:03,601 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34371
-2022-08-26 14:10:03,601 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34371
-2022-08-26 14:10:03,601 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:03,601 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34319
-2022-08-26 14:10:03,601 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36787
-2022-08-26 14:10:03,601 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,601 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:03,601 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-trx2jj06
-2022-08-26 14:10:03,601 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,602 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40581
-2022-08-26 14:10:03,602 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40581
-2022-08-26 14:10:03,602 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:03,602 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39577
-2022-08-26 14:10:03,602 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36787
-2022-08-26 14:10:03,602 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,602 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:03,602 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-glg0hhxm
-2022-08-26 14:10:03,602 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,605 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34371', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:03,605 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34371
-2022-08-26 14:10:03,605 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,606 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40581', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:03,606 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40581
-2022-08-26 14:10:03,606 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,606 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36787
-2022-08-26 14:10:03,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,606 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36787
-2022-08-26 14:10:03,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:03,607 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,607 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,620 - distributed.scheduler - INFO - Receive client connection: Client-740b18a1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:03,621 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:03,643 - distributed.scheduler - INFO - Remove client Client-740b18a1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:03,643 - distributed.scheduler - INFO - Remove client Client-740b18a1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:03,643 - distributed.scheduler - INFO - Close client connection: Client-740b18a1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:03,644 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34371
-2022-08-26 14:10:03,644 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40581
-2022-08-26 14:10:03,645 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34371', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:03,645 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34371
-2022-08-26 14:10:03,645 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40581', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:03,645 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40581
-2022-08-26 14:10:03,645 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:03,645 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0d80e550-ff61-4ca7-b514-e91998a14c93 Address tcp://127.0.0.1:34371 Status: Status.closing
-2022-08-26 14:10:03,646 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e5b8cddd-b0f3-4861-8ad3-0525d6b78929 Address tcp://127.0.0.1:40581 Status: Status.closing
-2022-08-26 14:10:03,647 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:03,647 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:03,860 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_sender_below_mean 2022-08-26 14:10:03,866 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:03,867 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:03,868 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37619
-2022-08-26 14:10:03,868 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41177
-2022-08-26 14:10:03,873 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33317'
-2022-08-26 14:10:03,873 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:40939'
-2022-08-26 14:10:04,571 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37847
-2022-08-26 14:10:04,571 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37847
-2022-08-26 14:10:04,571 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:04,571 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43875
-2022-08-26 14:10:04,571 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37619
-2022-08-26 14:10:04,571 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:04,571 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:04,571 - distributed.worker - INFO -                Memory:                   0.98 GiB
-2022-08-26 14:10:04,571 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ixtl0lu5
-2022-08-26 14:10:04,571 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:04,582 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38233
-2022-08-26 14:10:04,582 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38233
-2022-08-26 14:10:04,582 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:04,582 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40229
-2022-08-26 14:10:04,582 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37619
-2022-08-26 14:10:04,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:04,582 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:04,582 - distributed.worker - INFO -                Memory:                   0.98 GiB
-2022-08-26 14:10:04,583 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ddifx9k9
-2022-08-26 14:10:04,583 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:04,860 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38233', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:04,860 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38233
-2022-08-26 14:10:04,860 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:04,860 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37619
-2022-08-26 14:10:04,860 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:04,861 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:04,870 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37847', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:04,870 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37847
-2022-08-26 14:10:04,870 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:04,871 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37619
-2022-08-26 14:10:04,871 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:04,871 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:04,917 - distributed.scheduler - INFO - Receive client connection: Client-74d0d44d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:04,917 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:05,883 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:05,884 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:05,913 - distributed.scheduler - INFO - Remove client Client-74d0d44d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:05,913 - distributed.scheduler - INFO - Remove client Client-74d0d44d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:05,913 - distributed.scheduler - INFO - Close client connection: Client-74d0d44d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:05,915 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:33317'.
-2022-08-26 14:10:05,915 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:10:05,915 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:40939'.
-2022-08-26 14:10:05,915 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:10:05,915 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37847
-2022-08-26 14:10:05,916 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38233
-2022-08-26 14:10:05,916 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f81444f0-f1b0-4d6a-a420-93229d82dd54 Address tcp://127.0.0.1:37847 Status: Status.closing
-2022-08-26 14:10:05,916 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37847', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:05,916 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37847
-2022-08-26 14:10:05,916 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a1aa9eb4-43e6-4e70-969c-9dadd303191f Address tcp://127.0.0.1:38233 Status: Status.closing
-2022-08-26 14:10:05,917 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38233', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:05,917 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38233
-2022-08-26 14:10:05,917 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:06,081 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:06,081 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:06,296 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_least_recently_inserted_sender_min 2022-08-26 14:10:06,301 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:06,303 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:06,303 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38843
-2022-08-26 14:10:06,303 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43493
-2022-08-26 14:10:06,308 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33779'
-2022-08-26 14:10:06,309 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:36429'
-2022-08-26 14:10:07,013 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39923
-2022-08-26 14:10:07,013 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39923
-2022-08-26 14:10:07,013 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:07,013 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34719
-2022-08-26 14:10:07,013 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38843
-2022-08-26 14:10:07,013 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:07,013 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:07,013 - distributed.worker - INFO -                Memory:                   0.98 GiB
-2022-08-26 14:10:07,013 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jnzw4qym
-2022-08-26 14:10:07,013 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:07,019 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42843
-2022-08-26 14:10:07,019 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42843
-2022-08-26 14:10:07,019 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:07,019 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44001
-2022-08-26 14:10:07,019 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38843
-2022-08-26 14:10:07,019 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:07,019 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:07,020 - distributed.worker - INFO -                Memory:                   0.98 GiB
-2022-08-26 14:10:07,020 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yt77_oaq
-2022-08-26 14:10:07,020 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:07,298 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42843', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:07,298 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42843
-2022-08-26 14:10:07,298 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:07,298 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38843
-2022-08-26 14:10:07,298 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:07,299 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:07,307 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39923', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:07,307 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39923
-2022-08-26 14:10:07,307 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:07,307 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38843
-2022-08-26 14:10:07,308 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:07,308 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:07,351 - distributed.scheduler - INFO - Receive client connection: Client-76443802-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:07,351 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:07,373 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:07,373 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:07,375 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:07,375 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:08,322 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:08,322 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:08,329 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:08,329 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:10:08,341 - distributed.scheduler - INFO - Remove client Client-76443802-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:08,341 - distributed.scheduler - INFO - Remove client Client-76443802-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:08,341 - distributed.scheduler - INFO - Close client connection: Client-76443802-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:08,342 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:33779'.
-2022-08-26 14:10:08,342 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:10:08,342 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:36429'.
-2022-08-26 14:10:08,342 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:10:08,342 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42843
-2022-08-26 14:10:08,343 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39923
-2022-08-26 14:10:08,343 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-13495dc0-4a36-4bef-8a43-a25cea2e4a51 Address tcp://127.0.0.1:42843 Status: Status.closing
-2022-08-26 14:10:08,343 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42843', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:08,344 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42843
-2022-08-26 14:10:08,344 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-11d238cc-4932-4976-8214-4ef43ae46752 Address tcp://127.0.0.1:39923 Status: Status.closing
-2022-08-26 14:10:08,344 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39923', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:08,344 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39923
-2022-08-26 14:10:08,344 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:08,473 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:08,473 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:08,686 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
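
[The test_rebalance_* cases logged above all exercise the scheduler's memory-rebalancing path under different worker and memory conditions. As a rough, assumed illustration only (the tests themselves poke at scheduler internals not shown here), comparable behaviour is reachable from the public client API roughly like this:

    # Sketch only: trigger a rebalance of in-memory keys across workers.
    # Assumes dask.distributed; cluster sizes and data are arbitrary.
    from distributed import Client, LocalCluster

    cluster = LocalCluster(n_workers=2, threads_per_worker=1)
    client = Client(cluster)

    # Scatter some data to the workers ...
    futures = client.scatter(list(range(100)))

    # ... then ask the scheduler to even out memory usage across workers.
    client.rebalance(futures)

    client.close()
    cluster.close()
]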
-distributed/tests/test_scheduler.py::test_gather_on_worker 2022-08-26 14:10:08,691 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:08,693 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:08,693 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45677
-2022-08-26 14:10:08,693 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36655
-2022-08-26 14:10:08,698 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33439
-2022-08-26 14:10:08,698 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33439
-2022-08-26 14:10:08,698 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:08,698 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40495
-2022-08-26 14:10:08,698 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45677
-2022-08-26 14:10:08,698 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,698 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:08,698 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:08,698 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kdhj_ene
-2022-08-26 14:10:08,698 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,699 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42093
-2022-08-26 14:10:08,699 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42093
-2022-08-26 14:10:08,699 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:08,699 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34757
-2022-08-26 14:10:08,699 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45677
-2022-08-26 14:10:08,699 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,699 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:08,699 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:08,699 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s9udh77m
-2022-08-26 14:10:08,699 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,702 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33439', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:08,702 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33439
-2022-08-26 14:10:08,702 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:08,703 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42093', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:08,703 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42093
-2022-08-26 14:10:08,703 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:08,703 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45677
-2022-08-26 14:10:08,704 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,704 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45677
-2022-08-26 14:10:08,704 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,704 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:08,704 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:08,718 - distributed.scheduler - INFO - Receive client connection: Client-7714e446-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:08,718 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:08,729 - distributed.scheduler - INFO - Remove client Client-7714e446-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:08,729 - distributed.scheduler - INFO - Remove client Client-7714e446-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:08,730 - distributed.scheduler - INFO - Close client connection: Client-7714e446-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:08,731 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33439
-2022-08-26 14:10:08,731 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42093
-2022-08-26 14:10:08,732 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a8f2fde6-6225-44f7-84d5-02362ccdb783 Address tcp://127.0.0.1:33439 Status: Status.closing
-2022-08-26 14:10:08,732 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3c06effd-3eea-47ed-acf7-46a9d1423650 Address tcp://127.0.0.1:42093 Status: Status.closing
-2022-08-26 14:10:08,733 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33439', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:08,733 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33439
-2022-08-26 14:10:08,733 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42093', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:08,733 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42093
-2022-08-26 14:10:08,733 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:08,734 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:08,734 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:08,947 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gather_on_worker_bad_recipient 2022-08-26 14:10:08,952 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:08,954 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:08,954 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34469
-2022-08-26 14:10:08,954 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39275
-2022-08-26 14:10:08,959 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34141
-2022-08-26 14:10:08,959 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34141
-2022-08-26 14:10:08,959 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:08,959 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42583
-2022-08-26 14:10:08,959 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34469
-2022-08-26 14:10:08,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,959 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:08,959 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:08,959 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pwj9hnif
-2022-08-26 14:10:08,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,960 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36735
-2022-08-26 14:10:08,960 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36735
-2022-08-26 14:10:08,960 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:08,960 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41005
-2022-08-26 14:10:08,960 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34469
-2022-08-26 14:10:08,960 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,960 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:08,960 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:08,960 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_3qnpecv
-2022-08-26 14:10:08,960 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,963 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34141', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:08,964 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34141
-2022-08-26 14:10:08,964 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:08,964 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36735', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:08,964 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36735
-2022-08-26 14:10:08,964 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:08,965 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34469
-2022-08-26 14:10:08,965 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,965 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34469
-2022-08-26 14:10:08,965 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:08,965 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:08,965 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:08,979 - distributed.scheduler - INFO - Receive client connection: Client-773cc05a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:08,979 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:08,983 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36735
-2022-08-26 14:10:08,983 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36735', name: 1, status: closing, memory: 1, processing: 0>
-2022-08-26 14:10:08,983 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36735
-2022-08-26 14:10:08,984 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c4ac5fac-5ac0-4396-af95-685400a41151 Address tcp://127.0.0.1:36735 Status: Status.closing
-2022-08-26 14:10:09,085 - distributed.scheduler - WARNING - Communication with worker tcp://127.0.0.1:36735 failed during replication: OSError: Timed out trying to connect to tcp://127.0.0.1:36735 after 0.1 s
-2022-08-26 14:10:09,096 - distributed.scheduler - INFO - Remove client Client-773cc05a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,097 - distributed.scheduler - INFO - Remove client Client-773cc05a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,097 - distributed.scheduler - INFO - Close client connection: Client-773cc05a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,097 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34141
-2022-08-26 14:10:09,098 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34141', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:09,098 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34141
-2022-08-26 14:10:09,098 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:09,098 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fffd07bb-6d7b-4c6c-82fa-75ef23080802 Address tcp://127.0.0.1:34141 Status: Status.closing
-2022-08-26 14:10:09,099 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:09,099 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:09,312 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gather_on_worker_bad_sender 2022-08-26 14:10:09,317 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:09,319 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:09,319 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43213
-2022-08-26 14:10:09,319 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46161
-2022-08-26 14:10:09,324 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35395
-2022-08-26 14:10:09,324 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35395
-2022-08-26 14:10:09,324 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:09,324 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43895
-2022-08-26 14:10:09,324 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43213
-2022-08-26 14:10:09,324 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,324 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:09,324 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:09,324 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-aaaafxqf
-2022-08-26 14:10:09,324 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,325 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35641
-2022-08-26 14:10:09,325 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35641
-2022-08-26 14:10:09,325 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:09,325 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36017
-2022-08-26 14:10:09,325 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43213
-2022-08-26 14:10:09,325 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,325 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:09,325 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:09,325 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mw67x99m
-2022-08-26 14:10:09,325 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,328 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35395', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:09,328 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35395
-2022-08-26 14:10:09,328 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,329 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35641', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:09,329 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35641
-2022-08-26 14:10:09,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,329 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43213
-2022-08-26 14:10:09,329 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,330 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43213
-2022-08-26 14:10:09,330 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,330 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,330 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,344 - distributed.scheduler - INFO - Receive client connection: Client-777465a3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,344 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,446 - distributed.worker - WARNING - Could not find data: {'x': ['tcp://127.0.0.1:12345']} on workers: ['tcp://127.0.0.1:12345'] (who_has: {'x': ['tcp://127.0.0.1:12345']})
-2022-08-26 14:10:09,446 - distributed.scheduler - WARNING - Worker tcp://127.0.0.1:35395 failed to acquire keys: {'x': ('tcp://127.0.0.1:12345',)}
-2022-08-26 14:10:09,447 - distributed.scheduler - INFO - Remove client Client-777465a3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,447 - distributed.scheduler - INFO - Remove client Client-777465a3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,447 - distributed.scheduler - INFO - Close client connection: Client-777465a3-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,448 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35395
-2022-08-26 14:10:09,448 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35641
-2022-08-26 14:10:09,449 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35395', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:09,449 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35395
-2022-08-26 14:10:09,449 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35641', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:09,449 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35641
-2022-08-26 14:10:09,449 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:09,449 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-81ec31f1-354e-4364-b6e8-3f2348e7aa08 Address tcp://127.0.0.1:35395 Status: Status.closing
-2022-08-26 14:10:09,450 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-667759ec-db19-4e80-8a49-030f59c93b2c Address tcp://127.0.0.1:35641 Status: Status.closing
-2022-08-26 14:10:09,451 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:09,451 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:09,663 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
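
[The test_gather_on_worker* cases above target scheduler-internal transfers that pull keys onto a chosen worker, including the failure paths seen in the "failed during replication" and "failed to acquire keys" warnings. The internal coroutine itself is not reproduced here; a loose public-API analogue that causes similar worker-to-worker copies, offered purely as an assumed sketch, is:

    # Sketch only: copy a computed key onto additional workers, then fetch it.
    # Assumes dask.distributed; the submitted computation is an arbitrary stand-in.
    from distributed import Client, LocalCluster

    cluster = LocalCluster(n_workers=2, threads_per_worker=1)
    client = Client(cluster)

    fut = client.submit(sum, [1, 2, 3])   # result lands on one worker
    client.replicate([fut], n=2)          # copy that key onto a second worker
    assert client.gather(fut) == 6        # pull the result back to the client

    client.close()
    cluster.close()
]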
-distributed/tests/test_scheduler.py::test_gather_on_worker_bad_sender_replicated[False] 2022-08-26 14:10:09,669 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:09,671 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:09,671 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33479
-2022-08-26 14:10:09,671 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37805
-2022-08-26 14:10:09,676 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35665
-2022-08-26 14:10:09,676 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35665
-2022-08-26 14:10:09,676 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:09,676 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34213
-2022-08-26 14:10:09,676 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33479
-2022-08-26 14:10:09,676 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,676 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:09,676 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:09,676 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f8g5u1_i
-2022-08-26 14:10:09,676 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,677 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36443
-2022-08-26 14:10:09,677 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36443
-2022-08-26 14:10:09,677 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:09,677 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35047
-2022-08-26 14:10:09,677 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33479
-2022-08-26 14:10:09,677 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,677 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:09,677 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:09,677 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d_cy9s6r
-2022-08-26 14:10:09,677 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,680 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35665', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:09,680 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35665
-2022-08-26 14:10:09,680 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,681 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36443', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:09,681 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36443
-2022-08-26 14:10:09,681 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,681 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33479
-2022-08-26 14:10:09,681 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,682 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33479
-2022-08-26 14:10:09,682 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,682 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,682 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,696 - distributed.scheduler - INFO - Receive client connection: Client-77aa1dc0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,696 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,707 - distributed.scheduler - INFO - Remove client Client-77aa1dc0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,707 - distributed.scheduler - INFO - Remove client Client-77aa1dc0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,707 - distributed.scheduler - INFO - Close client connection: Client-77aa1dc0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,709 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35665
-2022-08-26 14:10:09,709 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36443
-2022-08-26 14:10:09,710 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-11763f9f-4291-4127-9538-6646b25604de Address tcp://127.0.0.1:35665 Status: Status.closing
-2022-08-26 14:10:09,710 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ee74cfb5-d1a6-4954-8291-528aa2f1798c Address tcp://127.0.0.1:36443 Status: Status.closing
-2022-08-26 14:10:09,710 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35665', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:09,711 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35665
-2022-08-26 14:10:09,711 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36443', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:09,711 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36443
-2022-08-26 14:10:09,711 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:09,712 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:09,712 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:09,925 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gather_on_worker_bad_sender_replicated[True] 2022-08-26 14:10:09,931 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:09,932 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:09,932 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41149
-2022-08-26 14:10:09,932 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43255
-2022-08-26 14:10:09,937 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34117
-2022-08-26 14:10:09,937 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34117
-2022-08-26 14:10:09,937 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:09,937 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34633
-2022-08-26 14:10:09,937 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41149
-2022-08-26 14:10:09,937 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,937 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:09,937 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:09,937 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mq4fbhot
-2022-08-26 14:10:09,937 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,938 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36393
-2022-08-26 14:10:09,938 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36393
-2022-08-26 14:10:09,938 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:09,938 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34301
-2022-08-26 14:10:09,938 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41149
-2022-08-26 14:10:09,938 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,938 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:09,938 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:09,938 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pp3mcvfk
-2022-08-26 14:10:09,938 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,941 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34117', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:09,942 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34117
-2022-08-26 14:10:09,942 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,942 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36393', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:09,942 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36393
-2022-08-26 14:10:09,942 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,943 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41149
-2022-08-26 14:10:09,943 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,943 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41149
-2022-08-26 14:10:09,943 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:09,943 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,943 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:09,957 - distributed.scheduler - INFO - Receive client connection: Client-77d1fc34-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:09,957 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,075 - distributed.scheduler - INFO - Remove client Client-77d1fc34-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,075 - distributed.scheduler - INFO - Remove client Client-77d1fc34-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,076 - distributed.scheduler - INFO - Close client connection: Client-77d1fc34-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,076 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34117
-2022-08-26 14:10:10,076 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36393
-2022-08-26 14:10:10,077 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34117', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:10,077 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34117
-2022-08-26 14:10:10,078 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36393', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:10,078 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36393
-2022-08-26 14:10:10,078 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:10,078 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dd2ab3b4-b180-4edf-9d3f-f8e566c19316 Address tcp://127.0.0.1:34117 Status: Status.closing
-2022-08-26 14:10:10,078 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5dc21213-f11d-4245-8fbd-54ef41cbca98 Address tcp://127.0.0.1:36393 Status: Status.closing
-2022-08-26 14:10:10,079 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:10,079 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:10,292 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gather_on_worker_key_not_on_sender 2022-08-26 14:10:10,298 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:10,300 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:10,300 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36079
-2022-08-26 14:10:10,300 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39687
-2022-08-26 14:10:10,304 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43989
-2022-08-26 14:10:10,304 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43989
-2022-08-26 14:10:10,305 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:10,305 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34159
-2022-08-26 14:10:10,305 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36079
-2022-08-26 14:10:10,305 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,305 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:10,305 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:10,305 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_bjf427y
-2022-08-26 14:10:10,305 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,305 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33423
-2022-08-26 14:10:10,306 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33423
-2022-08-26 14:10:10,306 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:10,306 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45345
-2022-08-26 14:10:10,306 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36079
-2022-08-26 14:10:10,306 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,306 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:10,306 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:10,306 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h6fgn9no
-2022-08-26 14:10:10,306 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,309 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43989', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:10,309 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43989
-2022-08-26 14:10:10,309 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,310 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33423', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:10,310 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33423
-2022-08-26 14:10:10,310 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,310 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36079
-2022-08-26 14:10:10,310 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,311 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36079
-2022-08-26 14:10:10,311 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,311 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,311 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,325 - distributed.scheduler - INFO - Receive client connection: Client-780a1248-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,325 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,328 - distributed.worker - WARNING - Could not find data: {'x': ['tcp://127.0.0.1:33423']} on workers: [] (who_has: {'x': ['tcp://127.0.0.1:33423']})
-2022-08-26 14:10:10,328 - distributed.scheduler - WARNING - Worker tcp://127.0.0.1:43989 failed to acquire keys: {'x': ('tcp://127.0.0.1:33423',)}
-2022-08-26 14:10:10,336 - distributed.scheduler - INFO - Remove client Client-780a1248-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,336 - distributed.scheduler - INFO - Remove client Client-780a1248-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,336 - distributed.scheduler - INFO - Close client connection: Client-780a1248-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,337 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43989
-2022-08-26 14:10:10,337 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33423
-2022-08-26 14:10:10,338 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43989', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:10,338 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43989
-2022-08-26 14:10:10,338 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33423', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:10,338 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33423
-2022-08-26 14:10:10,338 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:10,339 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f2cbf642-6279-44ad-ad92-586899c00f1e Address tcp://127.0.0.1:43989 Status: Status.closing
-2022-08-26 14:10:10,339 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1e727711-84b7-40a0-9f88-03c187e6083d Address tcp://127.0.0.1:33423 Status: Status.closing
-2022-08-26 14:10:10,340 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:10,340 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:10,554 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gather_on_worker_key_not_on_sender_replicated[False] 2022-08-26 14:10:10,560 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:10,561 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:10,562 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34077
-2022-08-26 14:10:10,562 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41837
-2022-08-26 14:10:10,568 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44309
-2022-08-26 14:10:10,568 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44309
-2022-08-26 14:10:10,568 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:10,568 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45589
-2022-08-26 14:10:10,568 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34077
-2022-08-26 14:10:10,568 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,568 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:10,568 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:10,568 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-33n0nbc1
-2022-08-26 14:10:10,568 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,569 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46341
-2022-08-26 14:10:10,569 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46341
-2022-08-26 14:10:10,569 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:10,569 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44585
-2022-08-26 14:10:10,569 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34077
-2022-08-26 14:10:10,569 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,569 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:10,569 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:10,569 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8m3f4dl8
-2022-08-26 14:10:10,569 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,570 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46379
-2022-08-26 14:10:10,570 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46379
-2022-08-26 14:10:10,570 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:10:10,570 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37057
-2022-08-26 14:10:10,570 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34077
-2022-08-26 14:10:10,570 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,570 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:10,570 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:10,570 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tdkrsagc
-2022-08-26 14:10:10,570 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,574 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44309', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:10,574 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44309
-2022-08-26 14:10:10,575 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,575 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46341', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:10,575 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46341
-2022-08-26 14:10:10,575 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,576 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46379', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:10,576 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46379
-2022-08-26 14:10:10,576 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,576 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34077
-2022-08-26 14:10:10,576 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,576 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34077
-2022-08-26 14:10:10,577 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,577 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34077
-2022-08-26 14:10:10,577 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,577 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,577 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,577 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,591 - distributed.scheduler - INFO - Receive client connection: Client-7832bdbd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,591 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,603 - distributed.scheduler - INFO - Remove client Client-7832bdbd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,603 - distributed.scheduler - INFO - Remove client Client-7832bdbd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,603 - distributed.scheduler - INFO - Close client connection: Client-7832bdbd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,605 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44309
-2022-08-26 14:10:10,605 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46341
-2022-08-26 14:10:10,605 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46379
-2022-08-26 14:10:10,606 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46341', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:10,606 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46341
-2022-08-26 14:10:10,606 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ef654769-d208-409d-a1de-e0f6d149c21e Address tcp://127.0.0.1:46341 Status: Status.closing
-2022-08-26 14:10:10,607 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-13a843f9-712e-4a59-b4f5-14020da411e7 Address tcp://127.0.0.1:44309 Status: Status.closing
-2022-08-26 14:10:10,607 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-281d6e02-cbc8-43f1-b472-b01e00e842ca Address tcp://127.0.0.1:46379 Status: Status.closing
-2022-08-26 14:10:10,608 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44309', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:10,608 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44309
-2022-08-26 14:10:10,608 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46379', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:10,608 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46379
-2022-08-26 14:10:10,608 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:10,609 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:10,609 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:10,822 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gather_on_worker_key_not_on_sender_replicated[True] 2022-08-26 14:10:10,828 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:10,830 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:10,830 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38373
-2022-08-26 14:10:10,830 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40339
-2022-08-26 14:10:10,836 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39315
-2022-08-26 14:10:10,836 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39315
-2022-08-26 14:10:10,836 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:10,836 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45353
-2022-08-26 14:10:10,837 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38373
-2022-08-26 14:10:10,837 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,837 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:10,837 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:10,837 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ia_ed8_8
-2022-08-26 14:10:10,837 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,837 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34817
-2022-08-26 14:10:10,837 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34817
-2022-08-26 14:10:10,837 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:10,838 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46447
-2022-08-26 14:10:10,838 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38373
-2022-08-26 14:10:10,838 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,838 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:10,838 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:10,838 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ho58p81a
-2022-08-26 14:10:10,838 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,838 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38053
-2022-08-26 14:10:10,838 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38053
-2022-08-26 14:10:10,839 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:10:10,839 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44001
-2022-08-26 14:10:10,839 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38373
-2022-08-26 14:10:10,839 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,839 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:10,839 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:10,839 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2jq7xrpj
-2022-08-26 14:10:10,839 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,843 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39315', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:10,843 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39315
-2022-08-26 14:10:10,843 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,843 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34817', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:10,844 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34817
-2022-08-26 14:10:10,844 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,844 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38053', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:10,844 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38053
-2022-08-26 14:10:10,844 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,845 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38373
-2022-08-26 14:10:10,845 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,845 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38373
-2022-08-26 14:10:10,845 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,845 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38373
-2022-08-26 14:10:10,845 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:10,846 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,846 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,846 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,860 - distributed.scheduler - INFO - Receive client connection: Client-785bbab4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,860 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:10,872 - distributed.scheduler - INFO - Remove client Client-785bbab4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,872 - distributed.scheduler - INFO - Remove client Client-785bbab4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,872 - distributed.scheduler - INFO - Close client connection: Client-785bbab4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:10,874 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39315
-2022-08-26 14:10:10,874 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34817
-2022-08-26 14:10:10,874 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38053
-2022-08-26 14:10:10,875 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34817', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:10,875 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34817
-2022-08-26 14:10:10,875 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9820681b-1a0d-478d-92d0-b36967506dea Address tcp://127.0.0.1:34817 Status: Status.closing
-2022-08-26 14:10:10,876 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c8581bd5-b42b-41f9-a00a-7c9019175c53 Address tcp://127.0.0.1:39315 Status: Status.closing
-2022-08-26 14:10:10,876 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-675d4079-a357-425f-a251-0d0635b0c2f4 Address tcp://127.0.0.1:38053 Status: Status.closing
-2022-08-26 14:10:10,877 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39315', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:10,877 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39315
-2022-08-26 14:10:10,877 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38053', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:10,877 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38053
-2022-08-26 14:10:10,877 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:10,878 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:10,878 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:11,092 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_gather_on_worker_duplicate_task 2022-08-26 14:10:11,098 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:11,099 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:11,099 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41851
-2022-08-26 14:10:11,099 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42681
-2022-08-26 14:10:11,106 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34101
-2022-08-26 14:10:11,106 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34101
-2022-08-26 14:10:11,106 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:11,106 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39245
-2022-08-26 14:10:11,106 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41851
-2022-08-26 14:10:11,106 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,106 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:11,106 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:11,106 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rpcl9j3r
-2022-08-26 14:10:11,106 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,107 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44159
-2022-08-26 14:10:11,107 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44159
-2022-08-26 14:10:11,107 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:11,107 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39153
-2022-08-26 14:10:11,107 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41851
-2022-08-26 14:10:11,107 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,107 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:11,107 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:11,107 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sd2cyj4j
-2022-08-26 14:10:11,107 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,108 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39655
-2022-08-26 14:10:11,108 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39655
-2022-08-26 14:10:11,108 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:10:11,108 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39175
-2022-08-26 14:10:11,108 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41851
-2022-08-26 14:10:11,108 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,108 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:11,108 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:11,108 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ua5s4po6
-2022-08-26 14:10:11,108 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,112 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34101', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:11,112 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34101
-2022-08-26 14:10:11,112 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,113 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44159', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:11,113 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44159
-2022-08-26 14:10:11,113 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,113 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39655', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:11,114 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39655
-2022-08-26 14:10:11,114 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,114 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41851
-2022-08-26 14:10:11,114 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,114 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41851
-2022-08-26 14:10:11,114 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,115 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41851
-2022-08-26 14:10:11,115 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,115 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,115 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,115 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,129 - distributed.scheduler - INFO - Receive client connection: Client-7884cf76-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,129 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,152 - distributed.scheduler - INFO - Remove client Client-7884cf76-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,152 - distributed.scheduler - INFO - Remove client Client-7884cf76-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,152 - distributed.scheduler - INFO - Close client connection: Client-7884cf76-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,153 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34101
-2022-08-26 14:10:11,153 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44159
-2022-08-26 14:10:11,153 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39655
-2022-08-26 14:10:11,154 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34101', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:11,154 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34101
-2022-08-26 14:10:11,155 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44159', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:11,155 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44159
-2022-08-26 14:10:11,155 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39655', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:11,155 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39655
-2022-08-26 14:10:11,155 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:11,155 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cdf6553b-c48e-42c0-ac96-49512d5563e9 Address tcp://127.0.0.1:34101 Status: Status.closing
-2022-08-26 14:10:11,155 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5b024528-1f91-43c1-be35-b92660953753 Address tcp://127.0.0.1:44159 Status: Status.closing
-2022-08-26 14:10:11,155 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3c8c572b-86a1-4a0a-a674-95144c3439bd Address tcp://127.0.0.1:39655 Status: Status.closing
-2022-08-26 14:10:11,157 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:11,157 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:11,371 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_rebalance_dead_recipient 2022-08-26 14:10:11,377 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:11,379 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:11,379 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38183
-2022-08-26 14:10:11,379 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35491
-2022-08-26 14:10:11,385 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41133
-2022-08-26 14:10:11,385 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41133
-2022-08-26 14:10:11,385 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:11,385 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38671
-2022-08-26 14:10:11,385 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38183
-2022-08-26 14:10:11,385 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,385 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:11,386 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:11,386 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yg6h5h9k
-2022-08-26 14:10:11,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,386 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38017
-2022-08-26 14:10:11,386 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38017
-2022-08-26 14:10:11,386 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:11,386 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36341
-2022-08-26 14:10:11,386 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38183
-2022-08-26 14:10:11,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,387 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:11,387 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:11,387 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-11h46jor
-2022-08-26 14:10:11,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,387 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39493
-2022-08-26 14:10:11,387 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39493
-2022-08-26 14:10:11,387 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:10:11,387 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33975
-2022-08-26 14:10:11,387 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38183
-2022-08-26 14:10:11,388 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,388 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:11,388 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:11,388 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e17gh1qp
-2022-08-26 14:10:11,388 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,392 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41133', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:11,392 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41133
-2022-08-26 14:10:11,392 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,392 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38017', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:11,393 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38017
-2022-08-26 14:10:11,393 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,393 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39493', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:11,393 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39493
-2022-08-26 14:10:11,393 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,394 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38183
-2022-08-26 14:10:11,394 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,394 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38183
-2022-08-26 14:10:11,394 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,394 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38183
-2022-08-26 14:10:11,394 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,395 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,395 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,395 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,409 - distributed.scheduler - INFO - Receive client connection: Client-78af7de9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,409 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,412 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39493
-2022-08-26 14:10:11,413 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39493', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:11,413 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39493
-2022-08-26 14:10:11,413 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6661371f-bdd0-425a-a86c-2dc53b55c6c2 Address tcp://127.0.0.1:39493 Status: Status.closing
-2022-08-26 14:10:11,514 - distributed.scheduler - WARNING - Communication with worker tcp://127.0.0.1:39493 failed during replication: OSError: Timed out trying to connect to tcp://127.0.0.1:39493 after 0.1 s
-2022-08-26 14:10:11,528 - distributed.scheduler - INFO - Remove client Client-78af7de9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,528 - distributed.scheduler - INFO - Remove client Client-78af7de9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,528 - distributed.scheduler - INFO - Close client connection: Client-78af7de9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,529 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41133
-2022-08-26 14:10:11,529 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38017
-2022-08-26 14:10:11,530 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41133', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:11,530 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41133
-2022-08-26 14:10:11,530 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38017', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:11,530 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38017
-2022-08-26 14:10:11,530 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:11,530 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d9044712-a09d-4f56-a7ba-66eceb9dc2b3 Address tcp://127.0.0.1:41133 Status: Status.closing
-2022-08-26 14:10:11,530 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-96e6ba63-c8ef-43b8-a475-676c4419029e Address tcp://127.0.0.1:38017 Status: Status.closing
-2022-08-26 14:10:11,532 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:11,532 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:11,745 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_delete_worker_data 2022-08-26 14:10:11,751 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:11,753 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:11,753 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36305
-2022-08-26 14:10:11,753 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39331
-2022-08-26 14:10:11,757 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45143
-2022-08-26 14:10:11,758 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45143
-2022-08-26 14:10:11,758 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:11,758 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42291
-2022-08-26 14:10:11,758 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36305
-2022-08-26 14:10:11,758 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,758 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:11,758 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:11,758 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i8xzjheq
-2022-08-26 14:10:11,758 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,759 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36197
-2022-08-26 14:10:11,759 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36197
-2022-08-26 14:10:11,759 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:11,759 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34523
-2022-08-26 14:10:11,759 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36305
-2022-08-26 14:10:11,759 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,759 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:11,759 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:11,759 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e38i_317
-2022-08-26 14:10:11,759 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,762 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45143', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:11,762 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45143
-2022-08-26 14:10:11,762 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,763 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36197', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:11,763 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36197
-2022-08-26 14:10:11,763 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,763 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36305
-2022-08-26 14:10:11,763 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,764 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36305
-2022-08-26 14:10:11,764 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:11,764 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,764 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,778 - distributed.scheduler - INFO - Receive client connection: Client-78e7ca3b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,778 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:11,789 - distributed.scheduler - INFO - Remove client Client-78e7ca3b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,790 - distributed.scheduler - INFO - Remove client Client-78e7ca3b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,790 - distributed.scheduler - INFO - Close client connection: Client-78e7ca3b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:11,791 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45143
-2022-08-26 14:10:11,792 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36197
-2022-08-26 14:10:11,792 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ed8cc24c-29cc-440b-ba57-8f97c3c5c8c1 Address tcp://127.0.0.1:45143 Status: Status.closing
-2022-08-26 14:10:11,793 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bbb412de-f220-4e05-83da-a575a62920d7 Address tcp://127.0.0.1:36197 Status: Status.closing
-2022-08-26 14:10:11,793 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45143', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:11,793 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45143
-2022-08-26 14:10:11,793 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36197', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:11,793 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36197
-2022-08-26 14:10:11,794 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:11,794 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:11,795 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:12,008 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_delete_worker_data_double_delete 2022-08-26 14:10:12,014 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:12,015 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:12,015 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35657
-2022-08-26 14:10:12,015 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40307
-2022-08-26 14:10:12,018 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44745
-2022-08-26 14:10:12,018 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44745
-2022-08-26 14:10:12,018 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:12,018 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45685
-2022-08-26 14:10:12,018 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35657
-2022-08-26 14:10:12,018 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,018 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:12,018 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:12,019 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7caq29a8
-2022-08-26 14:10:12,019 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,020 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44745', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:12,021 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44745
-2022-08-26 14:10:12,021 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,021 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35657
-2022-08-26 14:10:12,021 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,021 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,034 - distributed.scheduler - INFO - Receive client connection: Client-790eff05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,035 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,046 - distributed.scheduler - INFO - Remove client Client-790eff05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,046 - distributed.scheduler - INFO - Remove client Client-790eff05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,046 - distributed.scheduler - INFO - Close client connection: Client-790eff05-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,047 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44745
-2022-08-26 14:10:12,048 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a2971160-d81e-4040-a557-1918647e59f0 Address tcp://127.0.0.1:44745 Status: Status.closing
-2022-08-26 14:10:12,048 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44745', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:12,048 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44745
-2022-08-26 14:10:12,048 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:12,049 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:12,049 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:12,262 - distributed.utils_perf - WARNING - full garbage collections took 76% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_delete_worker_data_bad_worker 2022-08-26 14:10:12,267 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:12,269 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:12,269 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40253
-2022-08-26 14:10:12,269 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45157
-2022-08-26 14:10:12,274 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37819
-2022-08-26 14:10:12,274 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37819
-2022-08-26 14:10:12,274 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:12,274 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42131
-2022-08-26 14:10:12,274 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40253
-2022-08-26 14:10:12,274 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,274 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:12,274 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:12,274 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-efayx6wp
-2022-08-26 14:10:12,274 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,275 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45025
-2022-08-26 14:10:12,275 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45025
-2022-08-26 14:10:12,275 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:12,275 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35621
-2022-08-26 14:10:12,275 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40253
-2022-08-26 14:10:12,275 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,275 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:12,275 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:12,275 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_dlb3gz7
-2022-08-26 14:10:12,275 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,278 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37819', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:12,279 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37819
-2022-08-26 14:10:12,279 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,279 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45025', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:12,279 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45025
-2022-08-26 14:10:12,279 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,280 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40253
-2022-08-26 14:10:12,280 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,280 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40253
-2022-08-26 14:10:12,280 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,280 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,280 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,291 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37819
-2022-08-26 14:10:12,292 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37819', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:12,292 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37819
-2022-08-26 14:10:12,292 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d4c06ff6-d63a-49ea-9182-2dba82d7090b Address tcp://127.0.0.1:37819 Status: Status.closing
-2022-08-26 14:10:12,394 - distributed.scheduler - WARNING - Communication with worker tcp://127.0.0.1:37819 failed during replication: OSError: Timed out trying to connect to tcp://127.0.0.1:37819 after 0.1 s
-2022-08-26 14:10:12,394 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45025
-2022-08-26 14:10:12,394 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45025', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:12,395 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45025
-2022-08-26 14:10:12,395 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:12,395 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0ee6c005-bfcd-4218-8f5e-5162122e25f0 Address tcp://127.0.0.1:45025 Status: Status.closing
-2022-08-26 14:10:12,395 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:12,395 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:12,608 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_delete_worker_data_bad_task[False] 2022-08-26 14:10:12,614 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:12,616 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:12,616 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40123
-2022-08-26 14:10:12,616 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41437
-2022-08-26 14:10:12,619 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39729
-2022-08-26 14:10:12,619 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39729
-2022-08-26 14:10:12,619 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:12,619 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34759
-2022-08-26 14:10:12,619 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40123
-2022-08-26 14:10:12,619 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,619 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:12,619 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:12,619 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-by283m6g
-2022-08-26 14:10:12,619 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,621 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39729', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:12,621 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39729
-2022-08-26 14:10:12,621 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,622 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40123
-2022-08-26 14:10:12,622 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,622 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,635 - distributed.scheduler - INFO - Receive client connection: Client-796aa711-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,636 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,647 - distributed.scheduler - INFO - Remove client Client-796aa711-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,647 - distributed.scheduler - INFO - Remove client Client-796aa711-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,647 - distributed.scheduler - INFO - Close client connection: Client-796aa711-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,648 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39729
-2022-08-26 14:10:12,649 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-74e5d1c9-37b9-4a6c-b770-ba705dcb0906 Address tcp://127.0.0.1:39729 Status: Status.closing
-2022-08-26 14:10:12,649 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39729', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:12,649 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39729
-2022-08-26 14:10:12,649 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:12,650 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:12,650 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:12,862 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_delete_worker_data_bad_task[True] 2022-08-26 14:10:12,868 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:12,870 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:12,870 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40107
-2022-08-26 14:10:12,870 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40793
-2022-08-26 14:10:12,873 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33179
-2022-08-26 14:10:12,873 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33179
-2022-08-26 14:10:12,873 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:12,873 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38595
-2022-08-26 14:10:12,873 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40107
-2022-08-26 14:10:12,873 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,873 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:12,873 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:12,873 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e1y33dkm
-2022-08-26 14:10:12,873 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,875 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33179', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:12,875 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33179
-2022-08-26 14:10:12,875 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,876 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40107
-2022-08-26 14:10:12,876 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:12,876 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,889 - distributed.scheduler - INFO - Receive client connection: Client-79916907-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,889 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:12,901 - distributed.scheduler - INFO - Remove client Client-79916907-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,901 - distributed.scheduler - INFO - Remove client Client-79916907-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,901 - distributed.scheduler - INFO - Close client connection: Client-79916907-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:12,902 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33179
-2022-08-26 14:10:12,903 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4db16bfd-c607-43b9-8905-1a2ba517180d Address tcp://127.0.0.1:33179 Status: Status.closing
-2022-08-26 14:10:12,903 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33179', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:12,903 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33179
-2022-08-26 14:10:12,903 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:12,904 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:12,904 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:13,116 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_computations 2022-08-26 14:10:13,122 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:13,123 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:13,123 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42123
-2022-08-26 14:10:13,124 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42313
-2022-08-26 14:10:13,128 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33761
-2022-08-26 14:10:13,128 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33761
-2022-08-26 14:10:13,128 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:13,128 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33915
-2022-08-26 14:10:13,128 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42123
-2022-08-26 14:10:13,128 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,128 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:13,128 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:13,128 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kh98_wzx
-2022-08-26 14:10:13,128 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,129 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35009
-2022-08-26 14:10:13,129 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35009
-2022-08-26 14:10:13,129 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:13,129 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40409
-2022-08-26 14:10:13,129 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42123
-2022-08-26 14:10:13,129 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,129 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:13,129 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:13,129 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vymo0rw8
-2022-08-26 14:10:13,130 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,132 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33761', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:13,133 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33761
-2022-08-26 14:10:13,133 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,133 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35009', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:13,133 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35009
-2022-08-26 14:10:13,133 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,134 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42123
-2022-08-26 14:10:13,134 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,134 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42123
-2022-08-26 14:10:13,134 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,134 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,134 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,148 - distributed.scheduler - INFO - Receive client connection: Client-79b8e308-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,148 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,199 - distributed.scheduler - INFO - Remove client Client-79b8e308-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,199 - distributed.scheduler - INFO - Remove client Client-79b8e308-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,199 - distributed.scheduler - INFO - Close client connection: Client-79b8e308-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,200 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33761
-2022-08-26 14:10:13,200 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35009
-2022-08-26 14:10:13,201 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33761', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:13,201 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33761
-2022-08-26 14:10:13,201 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35009', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:13,201 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35009
-2022-08-26 14:10:13,201 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:13,202 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-995eb17d-9ef0-4a37-a651-c376bb8b0336 Address tcp://127.0.0.1:33761 Status: Status.closing
-2022-08-26 14:10:13,202 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-abeb49b5-e9c3-4517-954c-9d9dbbf73c14 Address tcp://127.0.0.1:35009 Status: Status.closing
-2022-08-26 14:10:13,203 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:13,203 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:13,417 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_computations_futures 2022-08-26 14:10:13,423 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:13,425 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:13,425 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45061
-2022-08-26 14:10:13,425 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38771
-2022-08-26 14:10:13,430 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40117
-2022-08-26 14:10:13,430 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40117
-2022-08-26 14:10:13,430 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:13,430 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34217
-2022-08-26 14:10:13,430 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45061
-2022-08-26 14:10:13,430 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,430 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:13,430 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:13,430 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1r58zdsa
-2022-08-26 14:10:13,430 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,431 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42955
-2022-08-26 14:10:13,431 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42955
-2022-08-26 14:10:13,431 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:13,431 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37345
-2022-08-26 14:10:13,431 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45061
-2022-08-26 14:10:13,431 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,431 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:13,431 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:13,431 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zwi5pxxr
-2022-08-26 14:10:13,431 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,434 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40117', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:13,434 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40117
-2022-08-26 14:10:13,434 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,435 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42955', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:13,435 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42955
-2022-08-26 14:10:13,435 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,435 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45061
-2022-08-26 14:10:13,435 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,436 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45061
-2022-08-26 14:10:13,436 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,436 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,436 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,450 - distributed.scheduler - INFO - Receive client connection: Client-79e6e98a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,450 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,492 - distributed.scheduler - INFO - Remove client Client-79e6e98a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,492 - distributed.scheduler - INFO - Remove client Client-79e6e98a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,492 - distributed.scheduler - INFO - Close client connection: Client-79e6e98a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,493 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40117
-2022-08-26 14:10:13,493 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42955
-2022-08-26 14:10:13,494 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40117', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:13,494 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40117
-2022-08-26 14:10:13,494 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42955', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:13,494 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42955
-2022-08-26 14:10:13,494 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:13,494 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c237a45e-47d9-4e83-98df-d6de3a9138bf Address tcp://127.0.0.1:40117 Status: Status.closing
-2022-08-26 14:10:13,495 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e5814fa4-f8af-4b9c-8120-24a6293f45d2 Address tcp://127.0.0.1:42955 Status: Status.closing
-2022-08-26 14:10:13,496 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:13,496 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:13,710 - distributed.utils_perf - WARNING - full garbage collections took 78% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_transition_counter 2022-08-26 14:10:13,716 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:13,717 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:13,718 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37029
-2022-08-26 14:10:13,718 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40231
-2022-08-26 14:10:13,720 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42987
-2022-08-26 14:10:13,720 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42987
-2022-08-26 14:10:13,721 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:13,721 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37663
-2022-08-26 14:10:13,721 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37029
-2022-08-26 14:10:13,721 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,721 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:13,721 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:13,721 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rxwbr9w3
-2022-08-26 14:10:13,721 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,723 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42987', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:13,723 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42987
-2022-08-26 14:10:13,723 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,723 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37029
-2022-08-26 14:10:13,723 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,724 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,737 - distributed.scheduler - INFO - Receive client connection: Client-7a12c152-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,737 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,760 - distributed.scheduler - INFO - Remove client Client-7a12c152-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,761 - distributed.scheduler - INFO - Remove client Client-7a12c152-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,761 - distributed.scheduler - INFO - Close client connection: Client-7a12c152-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:13,762 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42987
-2022-08-26 14:10:13,762 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0fe6f7e7-61b2-454c-bc93-43d82d3b3d94 Address tcp://127.0.0.1:42987 Status: Status.closing
-2022-08-26 14:10:13,763 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42987', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:13,763 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42987
-2022-08-26 14:10:13,763 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:13,764 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:13,764 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:13,977 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_transition_counter_max_scheduler SKIPPED
-distributed/tests/test_scheduler.py::test_transition_counter_max_worker 2022-08-26 14:10:13,984 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:13,985 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:13,985 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46149
-2022-08-26 14:10:13,985 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39115
-2022-08-26 14:10:13,988 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45409
-2022-08-26 14:10:13,988 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45409
-2022-08-26 14:10:13,988 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:13,988 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41283
-2022-08-26 14:10:13,988 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46149
-2022-08-26 14:10:13,988 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,988 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:13,989 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:13,989 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5c7it9ws
-2022-08-26 14:10:13,989 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,990 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45409', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:13,991 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45409
-2022-08-26 14:10:13,991 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:13,991 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46149
-2022-08-26 14:10:13,991 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:13,991 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:14,005 - distributed.scheduler - INFO - Receive client connection: Client-7a3b9c8e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:14,005 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:14,017 - distributed.worker - ERROR - TransitionCounterMaxExceeded: inc-deebfb8e8b05bf230e909b88993dc421 :: released->waiting
-  Story:
-    ('inc-deebfb8e8b05bf230e909b88993dc421', 'compute-task', 'released', 'compute-task-1661548214.0167513', 1661548214.0171123)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2512, in _transitions
-    process_recs(recommendations.copy())
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2506, in process_recs
-    a_recs, a_instructions = self._transition(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2421, in _transition
-    raise TransitionCounterMaxExceeded(ts.key, start, finish, self.story(ts))
-distributed.worker_state_machine.TransitionCounterMaxExceeded: TransitionCounterMaxExceeded: inc-deebfb8e8b05bf230e909b88993dc421 :: released->waiting
-  Story:
-    ('inc-deebfb8e8b05bf230e909b88993dc421', 'compute-task', 'released', 'compute-task-1661548214.0167513', 1661548214.0171123)
-2022-08-26 14:10:14,018 - distributed.core - ERROR - TransitionCounterMaxExceeded: inc-deebfb8e8b05bf230e909b88993dc421 :: released->waiting
-  Story:
-    ('inc-deebfb8e8b05bf230e909b88993dc421', 'compute-task', 'released', 'compute-task-1661548214.0167513', 1661548214.0171123)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 842, in handle_stream
-    handler(**merge(extra, msg))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1843, in _
-    self.handle_stimulus(event)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2512, in _transitions
-    process_recs(recommendations.copy())
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2506, in process_recs
-    a_recs, a_instructions = self._transition(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2421, in _transition
-    raise TransitionCounterMaxExceeded(ts.key, start, finish, self.story(ts))
-distributed.worker_state_machine.TransitionCounterMaxExceeded: TransitionCounterMaxExceeded: inc-deebfb8e8b05bf230e909b88993dc421 :: released->waiting
-  Story:
-    ('inc-deebfb8e8b05bf230e909b88993dc421', 'compute-task', 'released', 'compute-task-1661548214.0167513', 1661548214.0171123)
-2022-08-26 14:10:14,020 - distributed.worker - ERROR - TransitionCounterMaxExceeded: inc-deebfb8e8b05bf230e909b88993dc421 :: released->waiting
-  Story:
-    ('inc-deebfb8e8b05bf230e909b88993dc421', 'compute-task', 'released', 'compute-task-1661548214.0167513', 1661548214.0171123)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 179, in wrapper
-    return await method(self, *args, **kwargs)  # type: ignore
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1210, in handle_scheduler
-    await self.handle_stream(comm)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 842, in handle_stream
-    handler(**merge(extra, msg))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1843, in _
-    self.handle_stimulus(event)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2512, in _transitions
-    process_recs(recommendations.copy())
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2506, in process_recs
-    a_recs, a_instructions = self._transition(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2421, in _transition
-    raise TransitionCounterMaxExceeded(ts.key, start, finish, self.story(ts))
-distributed.worker_state_machine.TransitionCounterMaxExceeded: TransitionCounterMaxExceeded: inc-deebfb8e8b05bf230e909b88993dc421 :: released->waiting
-  Story:
-    ('inc-deebfb8e8b05bf230e909b88993dc421', 'compute-task', 'released', 'compute-task-1661548214.0167513', 1661548214.0171123)
-2022-08-26 14:10:14,020 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45409
-2022-08-26 14:10:14,020 - distributed.worker - INFO - Not waiting on executor to close
-2022-08-26 14:10:14,020 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45409', name: 0, status: running, memory: 0, processing: 1>
-2022-08-26 14:10:14,021 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45409
-2022-08-26 14:10:14,021 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:14,021 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:42036 remote=tcp://127.0.0.1:46149>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:10:14,022 - tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOMainLoop object at 0x5640415b18e0>>, <Task finished name='Task-170813' coro=<Worker.handle_scheduler() done, defined at /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py:176> exception=TransitionCounterMaxExceeded: inc-deebfb8e8b05bf230e909b88993dc421 :: released->waiting
-  Story:
-    ('inc-deebfb8e8b05bf230e909b88993dc421', 'compute-task', 'released', 'compute-task-1661548214.0167513', 1661548214.0171123)>)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 740, in _run_callback
-    ret = callback()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 764, in _discard_future_result
-    future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 179, in wrapper
-    return await method(self, *args, **kwargs)  # type: ignore
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1210, in handle_scheduler
-    await self.handle_stream(comm)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 842, in handle_stream
-    handler(**merge(extra, msg))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1843, in _
-    self.handle_stimulus(event)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2512, in _transitions
-    process_recs(recommendations.copy())
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2506, in process_recs
-    a_recs, a_instructions = self._transition(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2421, in _transition
-    raise TransitionCounterMaxExceeded(ts.key, start, finish, self.story(ts))
-distributed.worker_state_machine.TransitionCounterMaxExceeded: TransitionCounterMaxExceeded: inc-deebfb8e8b05bf230e909b88993dc421 :: released->waiting
-  Story:
-    ('inc-deebfb8e8b05bf230e909b88993dc421', 'compute-task', 'released', 'compute-task-1661548214.0167513', 1661548214.0171123)
-2022-08-26 14:10:14,027 - distributed.worker - ERROR - Validate state failed
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2521, in validate_state
-    self.state.validate_state()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3330, in validate_state
-    assert self.transition_counter < self.transition_counter_max
-AssertionError
-2022-08-26 14:10:14,027 - distributed.worker - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2521, in validate_state
-    self.state.validate_state()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3330, in validate_state
-    assert self.transition_counter < self.transition_counter_max
-AssertionError
-2022-08-26 14:10:14,028 - distributed.scheduler - INFO - Remove client Client-7a3b9c8e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:14,028 - distributed.scheduler - INFO - Remove client Client-7a3b9c8e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:14,028 - distributed.scheduler - INFO - Close client connection: Client-7a3b9c8e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:14,029 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:14,029 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:14,241 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_disable_transition_counter_max 2022-08-26 14:10:15,158 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:10:15,160 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:15,163 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:15,163 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37251
-2022-08-26 14:10:15,163 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:10:15,183 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36869
-2022-08-26 14:10:15,183 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36869
-2022-08-26 14:10:15,183 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39707
-2022-08-26 14:10:15,183 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37251
-2022-08-26 14:10:15,183 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:15,183 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:15,183 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:15,183 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-872bhcld
-2022-08-26 14:10:15,183 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:15,216 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40809
-2022-08-26 14:10:15,216 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40809
-2022-08-26 14:10:15,216 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33127
-2022-08-26 14:10:15,216 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37251
-2022-08-26 14:10:15,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:15,216 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:15,216 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:15,216 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qpf_5za4
-2022-08-26 14:10:15,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:15,477 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36869', status: init, memory: 0, processing: 0>
-2022-08-26 14:10:15,748 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36869
-2022-08-26 14:10:15,748 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:15,748 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37251
-2022-08-26 14:10:15,748 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:15,749 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40809', status: init, memory: 0, processing: 0>
-2022-08-26 14:10:15,749 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:15,749 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40809
-2022-08-26 14:10:15,749 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:15,749 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37251
-2022-08-26 14:10:15,750 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:15,750 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:15,764 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:15,765 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:15,766 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39651
-2022-08-26 14:10:15,766 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41663
-2022-08-26 14:10:15,766 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-872bhcld', purging
-2022-08-26 14:10:15,766 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-qpf_5za4', purging
-2022-08-26 14:10:15,769 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44707
-2022-08-26 14:10:15,769 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44707
-2022-08-26 14:10:15,769 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:15,769 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43537
-2022-08-26 14:10:15,769 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39651
-2022-08-26 14:10:15,769 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:15,769 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:15,769 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:15,769 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ywhio_yx
-2022-08-26 14:10:15,769 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:15,771 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44707', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:15,771 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44707
-2022-08-26 14:10:15,771 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:15,772 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39651
-2022-08-26 14:10:15,772 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:15,772 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:15,785 - distributed.scheduler - INFO - Receive client connection: Client-7b4b4887-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:15,785 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:15,807 - distributed.scheduler - INFO - Remove client Client-7b4b4887-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:15,807 - distributed.scheduler - INFO - Remove client Client-7b4b4887-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:15,808 - distributed.scheduler - INFO - Close client connection: Client-7b4b4887-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:15,808 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44707
-2022-08-26 14:10:15,809 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f7ca7894-7b08-4388-8cac-c0c303b4730b Address tcp://127.0.0.1:44707 Status: Status.closing
-2022-08-26 14:10:15,809 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44707', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:15,809 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44707
-2022-08-26 14:10:15,809 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:15,810 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:15,810 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:16,023 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_worker_heartbeat_after_cancel 2022-08-26 14:10:16,029 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:16,031 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:16,031 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35331
-2022-08-26 14:10:16,031 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46399
-2022-08-26 14:10:16,049 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36265
-2022-08-26 14:10:16,050 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36265
-2022-08-26 14:10:16,050 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:16,050 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37223
-2022-08-26 14:10:16,050 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,050 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,050 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,050 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,050 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-w57wiztp
-2022-08-26 14:10:16,050 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,051 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42373
-2022-08-26 14:10:16,051 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42373
-2022-08-26 14:10:16,051 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:16,051 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40311
-2022-08-26 14:10:16,051 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,051 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,051 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,051 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,051 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a3uj864c
-2022-08-26 14:10:16,051 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,051 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41663
-2022-08-26 14:10:16,052 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41663
-2022-08-26 14:10:16,052 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:10:16,052 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36941
-2022-08-26 14:10:16,052 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,052 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,052 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,052 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,052 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-888w70o6
-2022-08-26 14:10:16,052 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,052 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44159
-2022-08-26 14:10:16,053 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44159
-2022-08-26 14:10:16,053 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:10:16,053 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33619
-2022-08-26 14:10:16,053 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,053 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,053 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,053 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,053 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_ykg8bab
-2022-08-26 14:10:16,053 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,053 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44909
-2022-08-26 14:10:16,054 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44909
-2022-08-26 14:10:16,054 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:10:16,054 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41757
-2022-08-26 14:10:16,054 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,054 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,054 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,054 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,054 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_0unqvk6
-2022-08-26 14:10:16,054 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,054 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37241
-2022-08-26 14:10:16,055 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37241
-2022-08-26 14:10:16,055 - distributed.worker - INFO -           Worker name:                          5
-2022-08-26 14:10:16,055 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37355
-2022-08-26 14:10:16,055 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,055 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,055 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,055 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,055 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5t0en3f8
-2022-08-26 14:10:16,055 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,055 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45387
-2022-08-26 14:10:16,056 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45387
-2022-08-26 14:10:16,056 - distributed.worker - INFO -           Worker name:                          6
-2022-08-26 14:10:16,056 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44561
-2022-08-26 14:10:16,056 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,056 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,056 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,056 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,056 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yih8k22k
-2022-08-26 14:10:16,056 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,056 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33673
-2022-08-26 14:10:16,057 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33673
-2022-08-26 14:10:16,057 - distributed.worker - INFO -           Worker name:                          7
-2022-08-26 14:10:16,057 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42003
-2022-08-26 14:10:16,057 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,057 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,057 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,057 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,057 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7pbb_31g
-2022-08-26 14:10:16,057 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,058 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35871
-2022-08-26 14:10:16,058 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35871
-2022-08-26 14:10:16,058 - distributed.worker - INFO -           Worker name:                          8
-2022-08-26 14:10:16,058 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46753
-2022-08-26 14:10:16,058 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,058 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,058 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,059 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,059 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xg1rsrnw
-2022-08-26 14:10:16,059 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,059 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37155
-2022-08-26 14:10:16,059 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37155
-2022-08-26 14:10:16,059 - distributed.worker - INFO -           Worker name:                          9
-2022-08-26 14:10:16,059 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45421
-2022-08-26 14:10:16,059 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,059 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,059 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,059 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,060 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-iyw_3ios
-2022-08-26 14:10:16,060 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,070 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36265', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,070 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36265
-2022-08-26 14:10:16,070 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,070 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42373', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,071 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42373
-2022-08-26 14:10:16,071 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,071 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41663', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,071 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41663
-2022-08-26 14:10:16,071 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,072 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44159', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,072 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44159
-2022-08-26 14:10:16,072 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,072 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44909', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,073 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44909
-2022-08-26 14:10:16,073 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,073 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37241', name: 5, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,073 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37241
-2022-08-26 14:10:16,073 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,074 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45387', name: 6, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,074 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45387
-2022-08-26 14:10:16,074 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,074 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33673', name: 7, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,074 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33673
-2022-08-26 14:10:16,075 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,075 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35871', name: 8, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,075 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35871
-2022-08-26 14:10:16,075 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,075 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37155', name: 9, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,076 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37155
-2022-08-26 14:10:16,076 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,076 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,076 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,077 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,077 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,077 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,077 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,077 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,077 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,077 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,078 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,078 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,078 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,078 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,078 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,078 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,078 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,079 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,079 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,079 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35331
-2022-08-26 14:10:16,079 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,079 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,079 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,079 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,094 - distributed.scheduler - INFO - Receive client connection: Client-7b7a7110-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:16,095 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,138 - distributed.scheduler - INFO - Client Client-7b7a7110-2583-11ed-a99d-00d861bc4509 requests to cancel 100 keys
-2022-08-26 14:10:16,139 - distributed.scheduler - INFO - Scheduler cancels key slowinc-ab468f6a5cefe978bb028c0c932d7df2.  Force=False
-2022-08-26 14:10:16,139 - distributed.scheduler - INFO - Scheduler cancels key slowinc-be129768278f88aea7a43c69488c206b.  Force=False
-2022-08-26 14:10:16,139 - distributed.scheduler - INFO - Scheduler cancels key slowinc-54f077c3b58ab876b3e909f2707212c1.  Force=False
-2022-08-26 14:10:16,140 - distributed.scheduler - INFO - Scheduler cancels key slowinc-a73d441cddfd65555931a29d4ef32c6b.  Force=False
-2022-08-26 14:10:16,140 - distributed.scheduler - INFO - Scheduler cancels key slowinc-1529fe3550640d954651ebf0e0cf3cbc.  Force=False
-2022-08-26 14:10:16,140 - distributed.scheduler - INFO - Scheduler cancels key slowinc-186e7f92d8cec896a71f13cc756655fa.  Force=False
-2022-08-26 14:10:16,140 - distributed.scheduler - INFO - Scheduler cancels key slowinc-e221db5a06e296a0cfb70cd76afaa8db.  Force=False
-2022-08-26 14:10:16,140 - distributed.scheduler - INFO - Scheduler cancels key slowinc-942a45d2bd5a4ad8ffadf4d6bb8bc8e4.  Force=False
-2022-08-26 14:10:16,140 - distributed.scheduler - INFO - Scheduler cancels key slowinc-94fec23a3a1d514aad2ffe9e1a06e220.  Force=False
-2022-08-26 14:10:16,140 - distributed.scheduler - INFO - Scheduler cancels key slowinc-fb38aefd9a197d359dca3ace96f7da2f.  Force=False
-2022-08-26 14:10:16,140 - distributed.scheduler - INFO - Scheduler cancels key slowinc-a997cf2891085c3fa6d8dca727201c29.  Force=False
-2022-08-26 14:10:16,140 - distributed.scheduler - INFO - Scheduler cancels key slowinc-5a1bfed57e51bfa30d90dc387c0fdeed.  Force=False
-2022-08-26 14:10:16,140 - distributed.scheduler - INFO - Scheduler cancels key slowinc-fba071057559d647ba35bc48ebfeecc4.  Force=False
-2022-08-26 14:10:16,141 - distributed.scheduler - INFO - Scheduler cancels key slowinc-60b4a6e8af9fb88ac64f71210b4a1368.  Force=False
-2022-08-26 14:10:16,141 - distributed.scheduler - INFO - Scheduler cancels key slowinc-b5c1372749e6858ec7d34ae90fa2d58a.  Force=False
-2022-08-26 14:10:16,141 - distributed.scheduler - INFO - Scheduler cancels key slowinc-3d010fe4dc98cf9065d995675ecca1a2.  Force=False
-2022-08-26 14:10:16,141 - distributed.scheduler - INFO - Scheduler cancels key slowinc-ec8de665b24eda96a8fbf36732f59324.  Force=False
-2022-08-26 14:10:16,141 - distributed.scheduler - INFO - Scheduler cancels key slowinc-4f34ca12ae4f75476dff2a8a0a557f3c.  Force=False
-2022-08-26 14:10:16,141 - distributed.scheduler - INFO - Scheduler cancels key slowinc-3ea05a92826148caccc90db9fe7bbb73.  Force=False
-2022-08-26 14:10:16,141 - distributed.scheduler - INFO - Scheduler cancels key slowinc-74cdca770422df77a8bcd1e65942370e.  Force=False
-2022-08-26 14:10:16,141 - distributed.scheduler - INFO - Scheduler cancels key slowinc-78163947d51bacf4172539b4fc4d414f.  Force=False
-2022-08-26 14:10:16,141 - distributed.scheduler - INFO - Scheduler cancels key slowinc-5d949c82e5f5a3a2b46566c0c7194700.  Force=False
-2022-08-26 14:10:16,141 - distributed.scheduler - INFO - Scheduler cancels key slowinc-9802b869dbb4ee0436d93a948dd913c1.  Force=False
-2022-08-26 14:10:16,142 - distributed.scheduler - INFO - Scheduler cancels key slowinc-dd3ce0983e80443b000d62269eb697f9.  Force=False
-2022-08-26 14:10:16,142 - distributed.scheduler - INFO - Scheduler cancels key slowinc-ed8fedc9bd30c5b891beb5108f88f0ce.  Force=False
-2022-08-26 14:10:16,142 - distributed.scheduler - INFO - Scheduler cancels key slowinc-3f3be703e102de4cf71052b3ae1b1142.  Force=False
-2022-08-26 14:10:16,142 - distributed.scheduler - INFO - Scheduler cancels key slowinc-397b24a808b58c4de1d1725561996eb2.  Force=False
-2022-08-26 14:10:16,142 - distributed.scheduler - INFO - Scheduler cancels key slowinc-45272af76e2b3c3a47e44548c86368e0.  Force=False
-2022-08-26 14:10:16,142 - distributed.scheduler - INFO - Scheduler cancels key slowinc-979db0dba9c3500177d9cfefbd95bc48.  Force=False
-2022-08-26 14:10:16,142 - distributed.scheduler - INFO - Scheduler cancels key slowinc-7d47e0ff561483c9438e2dfffe31277e.  Force=False
-2022-08-26 14:10:16,142 - distributed.scheduler - INFO - Scheduler cancels key slowinc-47ef0236748637ed9cd3ee1377acc01d.  Force=False
-2022-08-26 14:10:16,142 - distributed.scheduler - INFO - Scheduler cancels key slowinc-e05e0a800a3140e6f7dac4cd993a53a0.  Force=False
-2022-08-26 14:10:16,143 - distributed.scheduler - INFO - Scheduler cancels key slowinc-c128babaa90472f31f9177e828eaf4b8.  Force=False
-2022-08-26 14:10:16,143 - distributed.scheduler - INFO - Scheduler cancels key slowinc-58c4a4de45c5dd8a9bb0326225a525e3.  Force=False
-2022-08-26 14:10:16,143 - distributed.scheduler - INFO - Scheduler cancels key slowinc-f8f51086079d8fffbc52abaeef38ed06.  Force=False
-2022-08-26 14:10:16,143 - distributed.scheduler - INFO - Scheduler cancels key slowinc-4460bef37fedbf84ec367636974e7074.  Force=False
-2022-08-26 14:10:16,143 - distributed.scheduler - INFO - Scheduler cancels key slowinc-c1e9bad68cc25ee7cc034f6ed72e5551.  Force=False
-2022-08-26 14:10:16,143 - distributed.scheduler - INFO - Scheduler cancels key slowinc-b4a2ff9867f99c2487f52b6f8b51a192.  Force=False
-2022-08-26 14:10:16,143 - distributed.scheduler - INFO - Scheduler cancels key slowinc-f528d7f40fb727ea527f52015ffe9d9b.  Force=False
-2022-08-26 14:10:16,143 - distributed.scheduler - INFO - Scheduler cancels key slowinc-513d21a99fc552859defd415b5b99215.  Force=False
-2022-08-26 14:10:16,143 - distributed.scheduler - INFO - Scheduler cancels key slowinc-354dfabd00efae5ea979423b53efbcc6.  Force=False
-2022-08-26 14:10:16,143 - distributed.scheduler - INFO - Scheduler cancels key slowinc-ae931334b675fd5d3c04099db9d3f181.  Force=False
-2022-08-26 14:10:16,143 - distributed.scheduler - INFO - Scheduler cancels key slowinc-ed68b322aa20f3c317ec29accaab39de.  Force=False
-2022-08-26 14:10:16,144 - distributed.scheduler - INFO - Scheduler cancels key slowinc-c6fab42f67a741b8b93a9e1f460f50bf.  Force=False
-2022-08-26 14:10:16,144 - distributed.scheduler - INFO - Scheduler cancels key slowinc-c6bc0be7d1220d41d2e7bd5084e20d67.  Force=False
-2022-08-26 14:10:16,144 - distributed.scheduler - INFO - Scheduler cancels key slowinc-3ac9f09288299fa8b6b4f620d92aa387.  Force=False
-2022-08-26 14:10:16,144 - distributed.scheduler - INFO - Scheduler cancels key slowinc-a22b45eb58e0fc93838f76915c4c23cc.  Force=False
-2022-08-26 14:10:16,144 - distributed.scheduler - INFO - Scheduler cancels key slowinc-9393a294e973096da250ae88410b0e5f.  Force=False
-2022-08-26 14:10:16,144 - distributed.scheduler - INFO - Scheduler cancels key slowinc-61c7df7905f6fd7823511cbb6c488709.  Force=False
-2022-08-26 14:10:16,144 - distributed.scheduler - INFO - Scheduler cancels key slowinc-2a0c1b8bc59a90ae73f7ce799cbe6be1.  Force=False
-2022-08-26 14:10:16,144 - distributed.scheduler - INFO - Scheduler cancels key slowinc-8c7a2cf658ac290e6e12ef453d939b47.  Force=False
-2022-08-26 14:10:16,144 - distributed.scheduler - INFO - Scheduler cancels key slowinc-eebbab4a00779edf920c519214f871c1.  Force=False
-2022-08-26 14:10:16,144 - distributed.scheduler - INFO - Scheduler cancels key slowinc-34e5483142450207eedbc2ded8568d65.  Force=False
-2022-08-26 14:10:16,145 - distributed.scheduler - INFO - Scheduler cancels key slowinc-619e7d83b71ce96b7beaca9b3333f0d9.  Force=False
-2022-08-26 14:10:16,145 - distributed.scheduler - INFO - Scheduler cancels key slowinc-cf4a79516aea5175912b56cdf92d6edd.  Force=False
-2022-08-26 14:10:16,145 - distributed.scheduler - INFO - Scheduler cancels key slowinc-32e2c14c9674e86b532a82832f192a3b.  Force=False
-2022-08-26 14:10:16,145 - distributed.scheduler - INFO - Scheduler cancels key slowinc-b6ebca902cc83cc79ee32c60ef4bf8e0.  Force=False
-2022-08-26 14:10:16,145 - distributed.scheduler - INFO - Scheduler cancels key slowinc-47fea7cb04e0c6bd55a2f4ea6ce4b150.  Force=False
-2022-08-26 14:10:16,145 - distributed.scheduler - INFO - Scheduler cancels key slowinc-734008dc2d80dfa1e52269e9e70a3a08.  Force=False
-2022-08-26 14:10:16,145 - distributed.scheduler - INFO - Scheduler cancels key slowinc-310d857b9c6c94d719042d182cbf1680.  Force=False
-2022-08-26 14:10:16,145 - distributed.scheduler - INFO - Scheduler cancels key slowinc-916ff900d49abdfe4e45db63ff858d75.  Force=False
-2022-08-26 14:10:16,145 - distributed.scheduler - INFO - Scheduler cancels key slowinc-b1fbc66a0544b28dc75e486d91504274.  Force=False
-2022-08-26 14:10:16,145 - distributed.scheduler - INFO - Scheduler cancels key slowinc-a78dfeb6376398ffe14397fafe0af073.  Force=False
-2022-08-26 14:10:16,146 - distributed.scheduler - INFO - Scheduler cancels key slowinc-c3b108ed92a63f74c2ade0a1446d2dab.  Force=False
-2022-08-26 14:10:16,146 - distributed.scheduler - INFO - Scheduler cancels key slowinc-6f6ba664b9201b65ca81e71537abf929.  Force=False
-2022-08-26 14:10:16,146 - distributed.scheduler - INFO - Scheduler cancels key slowinc-9a329951769e038426fad4bace72cd8f.  Force=False
-2022-08-26 14:10:16,146 - distributed.scheduler - INFO - Scheduler cancels key slowinc-c997a9412d556df1e2d5de7658715493.  Force=False
-2022-08-26 14:10:16,146 - distributed.scheduler - INFO - Scheduler cancels key slowinc-5510c4b87492b10e25109f2e92a2dbb8.  Force=False
-2022-08-26 14:10:16,146 - distributed.scheduler - INFO - Scheduler cancels key slowinc-04347b8f51a3b3d4e7add2beda7d8e6e.  Force=False
-2022-08-26 14:10:16,146 - distributed.scheduler - INFO - Scheduler cancels key slowinc-e75fd2276628be22e786930e4125eabd.  Force=False
-2022-08-26 14:10:16,146 - distributed.scheduler - INFO - Scheduler cancels key slowinc-3428dd796985b82f4e66945db12e930a.  Force=False
-2022-08-26 14:10:16,146 - distributed.scheduler - INFO - Scheduler cancels key slowinc-f5bbd094cdbcae9df623fb99cd2b14ba.  Force=False
-2022-08-26 14:10:16,146 - distributed.scheduler - INFO - Scheduler cancels key slowinc-b26c9e266cffc28af4d34041090a419b.  Force=False
-2022-08-26 14:10:16,147 - distributed.scheduler - INFO - Scheduler cancels key slowinc-2124fd26f425a70c5cc03022ddd82cb5.  Force=False
-2022-08-26 14:10:16,147 - distributed.scheduler - INFO - Scheduler cancels key slowinc-1abe6beef5a972738b95fd429f282599.  Force=False
-2022-08-26 14:10:16,147 - distributed.scheduler - INFO - Scheduler cancels key slowinc-ddbbf4a3dd6b3def31edb6f3ee9096e8.  Force=False
-2022-08-26 14:10:16,147 - distributed.scheduler - INFO - Scheduler cancels key slowinc-aaaa0a9084a22887d96b31de82bda290.  Force=False
-2022-08-26 14:10:16,147 - distributed.scheduler - INFO - Scheduler cancels key slowinc-f98b24a2982e05547ff793162232995d.  Force=False
-2022-08-26 14:10:16,147 - distributed.scheduler - INFO - Scheduler cancels key slowinc-fecf2c95a75d1a6261ab9246089b2f56.  Force=False
-2022-08-26 14:10:16,147 - distributed.scheduler - INFO - Scheduler cancels key slowinc-61a1dfa592adfd13b52eb30fcb491492.  Force=False
-2022-08-26 14:10:16,147 - distributed.scheduler - INFO - Scheduler cancels key slowinc-16f6dc473633d23ba1bb0a75ad196a69.  Force=False
-2022-08-26 14:10:16,147 - distributed.scheduler - INFO - Scheduler cancels key slowinc-93edc54bec3d6b050fbb861669acaacc.  Force=False
-2022-08-26 14:10:16,147 - distributed.scheduler - INFO - Scheduler cancels key slowinc-a32b76e3cd635f2d4bbfbc35e4c31e04.  Force=False
-2022-08-26 14:10:16,148 - distributed.scheduler - INFO - Scheduler cancels key slowinc-93487e006b6a24b250351fe19a7feb28.  Force=False
-2022-08-26 14:10:16,148 - distributed.scheduler - INFO - Scheduler cancels key slowinc-0d444867a1c4518ec8bbd5b3b5d2afec.  Force=False
-2022-08-26 14:10:16,148 - distributed.scheduler - INFO - Scheduler cancels key slowinc-6df41eec0c4da9435c81c41eb12fdc45.  Force=False
-2022-08-26 14:10:16,148 - distributed.scheduler - INFO - Scheduler cancels key slowinc-31440eca9716a06f24f4ccbcd29a84f6.  Force=False
-2022-08-26 14:10:16,148 - distributed.scheduler - INFO - Scheduler cancels key slowinc-b66619b8b1ab734d952c91857de24f4e.  Force=False
-2022-08-26 14:10:16,148 - distributed.scheduler - INFO - Scheduler cancels key slowinc-263b8bcf13799645e1e30aec70867bdf.  Force=False
-2022-08-26 14:10:16,148 - distributed.scheduler - INFO - Scheduler cancels key slowinc-db4312f689cfac83491586a36a73e299.  Force=False
-2022-08-26 14:10:16,148 - distributed.scheduler - INFO - Scheduler cancels key slowinc-50db3f526f9ca9b8f4d2b978564b1c8b.  Force=False
-2022-08-26 14:10:16,148 - distributed.scheduler - INFO - Scheduler cancels key slowinc-d44393cbb7a374ee62dab428b27d70cf.  Force=False
-2022-08-26 14:10:16,148 - distributed.scheduler - INFO - Scheduler cancels key slowinc-e0688ea08135133254c890b4fbb3071c.  Force=False
-2022-08-26 14:10:16,149 - distributed.scheduler - INFO - Scheduler cancels key slowinc-bd1781e6aa33e869f9c72f64297b571f.  Force=False
-2022-08-26 14:10:16,149 - distributed.scheduler - INFO - Scheduler cancels key slowinc-0241f7d9c3f8f4a89cc81ba1cdef45f1.  Force=False
-2022-08-26 14:10:16,149 - distributed.scheduler - INFO - Scheduler cancels key slowinc-982a4927dde4fa74381dd48d60188667.  Force=False
-2022-08-26 14:10:16,149 - distributed.scheduler - INFO - Scheduler cancels key slowinc-1b040c1bc79716204b29d54c217521b1.  Force=False
-2022-08-26 14:10:16,149 - distributed.scheduler - INFO - Scheduler cancels key slowinc-1accf2e562cfb25ae37418f1f220535c.  Force=False
-2022-08-26 14:10:16,149 - distributed.scheduler - INFO - Scheduler cancels key slowinc-678c11824dc382ba4e23d6d0066cd168.  Force=False
-2022-08-26 14:10:16,149 - distributed.scheduler - INFO - Scheduler cancels key slowinc-501bed35d0509d7dd907cad4e0e00728.  Force=False
-2022-08-26 14:10:16,256 - distributed.scheduler - INFO - Remove client Client-7b7a7110-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:16,256 - distributed.scheduler - INFO - Remove client Client-7b7a7110-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:16,256 - distributed.scheduler - INFO - Close client connection: Client-7b7a7110-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:16,257 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36265
-2022-08-26 14:10:16,257 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42373
-2022-08-26 14:10:16,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41663
-2022-08-26 14:10:16,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44159
-2022-08-26 14:10:16,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44909
-2022-08-26 14:10:16,258 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37241
-2022-08-26 14:10:16,259 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45387
-2022-08-26 14:10:16,259 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33673
-2022-08-26 14:10:16,259 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35871
-2022-08-26 14:10:16,259 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37155
-2022-08-26 14:10:16,262 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36265', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:16,262 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36265
-2022-08-26 14:10:16,262 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42373', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:16,263 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42373
-2022-08-26 14:10:16,263 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41663', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:16,263 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41663
-2022-08-26 14:10:16,263 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44159', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:16,263 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44159
-2022-08-26 14:10:16,263 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44909', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:16,263 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44909
-2022-08-26 14:10:16,263 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37241', name: 5, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:16,263 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37241
-2022-08-26 14:10:16,264 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45387', name: 6, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:16,264 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45387
-2022-08-26 14:10:16,264 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33673', name: 7, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:16,264 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33673
-2022-08-26 14:10:16,264 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35871', name: 8, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:16,264 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35871
-2022-08-26 14:10:16,264 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37155', name: 9, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:16,264 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37155
-2022-08-26 14:10:16,264 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:16,264 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0cb241dc-3d25-48f3-aaac-57b968222662 Address tcp://127.0.0.1:36265 Status: Status.closing
-2022-08-26 14:10:16,265 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-48f856bd-12c0-40d5-81ad-adc6ac3d0c01 Address tcp://127.0.0.1:42373 Status: Status.closing
-2022-08-26 14:10:16,265 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c45c479e-3a0a-44ac-9411-b2d73a2f953a Address tcp://127.0.0.1:41663 Status: Status.closing
-2022-08-26 14:10:16,265 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3db7087d-990b-4923-a547-e31306c7fc5d Address tcp://127.0.0.1:44159 Status: Status.closing
-2022-08-26 14:10:16,266 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8622161a-a0ef-42ed-b2c2-802484d7e339 Address tcp://127.0.0.1:44909 Status: Status.closing
-2022-08-26 14:10:16,266 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d694cd3e-91ce-4326-b924-a15b51e87d4b Address tcp://127.0.0.1:37241 Status: Status.closing
-2022-08-26 14:10:16,266 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4f87fb76-21d6-4484-be59-2b9a63302028 Address tcp://127.0.0.1:45387 Status: Status.closing
-2022-08-26 14:10:16,266 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-45f4a9a4-8a07-4814-8fbf-5e2f5456b9a1 Address tcp://127.0.0.1:33673 Status: Status.closing
-2022-08-26 14:10:16,266 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2709a6d9-6f90-4908-b4c1-2689caf06360 Address tcp://127.0.0.1:35871 Status: Status.closing
-2022-08-26 14:10:16,267 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-78575beb-3471-46c8-9286-afacee0b9972 Address tcp://127.0.0.1:37155 Status: Status.closing
-2022-08-26 14:10:16,271 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:16,272 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:16,489 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_set_restrictions 2022-08-26 14:10:16,496 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:16,498 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:16,498 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33225
-2022-08-26 14:10:16,498 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42673
-2022-08-26 14:10:16,503 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45897
-2022-08-26 14:10:16,503 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45897
-2022-08-26 14:10:16,503 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:16,503 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35043
-2022-08-26 14:10:16,503 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33225
-2022-08-26 14:10:16,503 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,503 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,503 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,503 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-adoodhaw
-2022-08-26 14:10:16,503 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,504 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42017
-2022-08-26 14:10:16,504 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42017
-2022-08-26 14:10:16,504 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:16,504 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40023
-2022-08-26 14:10:16,504 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33225
-2022-08-26 14:10:16,504 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,504 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,504 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,504 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0yusoqls
-2022-08-26 14:10:16,504 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,507 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45897', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,507 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45897
-2022-08-26 14:10:16,508 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,508 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42017', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,508 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42017
-2022-08-26 14:10:16,508 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,508 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33225
-2022-08-26 14:10:16,508 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,509 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33225
-2022-08-26 14:10:16,509 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,509 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,509 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,523 - distributed.scheduler - INFO - Receive client connection: Client-7bbbd30b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:16,523 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,538 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42017
-2022-08-26 14:10:16,538 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42017', name: 1, status: closing, memory: 1, processing: 0>
-2022-08-26 14:10:16,538 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42017
-2022-08-26 14:10:16,539 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2c345ecc-1022-49e4-a5a9-42b171228066 Address tcp://127.0.0.1:42017 Status: Status.closing
-2022-08-26 14:10:16,555 - distributed.scheduler - INFO - Remove client Client-7bbbd30b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:16,555 - distributed.scheduler - INFO - Remove client Client-7bbbd30b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:16,555 - distributed.scheduler - INFO - Close client connection: Client-7bbbd30b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:16,555 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45897
-2022-08-26 14:10:16,556 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45897', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:16,556 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45897
-2022-08-26 14:10:16,556 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:16,556 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-acf002e9-2c21-4acb-88da-3d8d116cdd03 Address tcp://127.0.0.1:45897 Status: Status.closing
-2022-08-26 14:10:16,557 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:16,557 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:16,774 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_avoid_paused_workers 2022-08-26 14:10:16,780 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:16,781 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:16,782 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34217
-2022-08-26 14:10:16,782 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40775
-2022-08-26 14:10:16,788 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39355
-2022-08-26 14:10:16,788 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39355
-2022-08-26 14:10:16,788 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:16,788 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33205
-2022-08-26 14:10:16,788 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34217
-2022-08-26 14:10:16,788 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,788 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,788 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,788 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-muoxojxc
-2022-08-26 14:10:16,788 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,789 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33097
-2022-08-26 14:10:16,789 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33097
-2022-08-26 14:10:16,789 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:16,789 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44127
-2022-08-26 14:10:16,789 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34217
-2022-08-26 14:10:16,789 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,789 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,789 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,789 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0jgx3_5b
-2022-08-26 14:10:16,789 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,790 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42353
-2022-08-26 14:10:16,790 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42353
-2022-08-26 14:10:16,790 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:10:16,790 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41393
-2022-08-26 14:10:16,790 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34217
-2022-08-26 14:10:16,790 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,790 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:16,790 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:16,790 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1dm02ct0
-2022-08-26 14:10:16,790 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,794 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39355', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,794 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39355
-2022-08-26 14:10:16,794 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,795 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33097', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,795 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33097
-2022-08-26 14:10:16,795 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,795 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42353', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:16,796 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42353
-2022-08-26 14:10:16,796 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,796 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34217
-2022-08-26 14:10:16,796 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,796 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34217
-2022-08-26 14:10:16,796 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,797 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34217
-2022-08-26 14:10:16,797 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:16,797 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,797 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,797 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:16,811 - distributed.scheduler - INFO - Receive client connection: Client-7be7cb61-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:16,811 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:17,247 - distributed.scheduler - INFO - Remove client Client-7be7cb61-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:17,248 - distributed.scheduler - INFO - Remove client Client-7be7cb61-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:17,248 - distributed.scheduler - INFO - Close client connection: Client-7be7cb61-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:17,248 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39355
-2022-08-26 14:10:17,249 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33097
-2022-08-26 14:10:17,249 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42353
-2022-08-26 14:10:17,250 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39355', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:17,250 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39355
-2022-08-26 14:10:17,250 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33097', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:17,250 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33097
-2022-08-26 14:10:17,250 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42353', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:17,251 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42353
-2022-08-26 14:10:17,251 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:17,251 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e92ae8e2-b421-4e60-8030-88340f50c16a Address tcp://127.0.0.1:39355 Status: Status.closing
-2022-08-26 14:10:17,251 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6a3857bd-2e77-43c9-a530-d5aa22d51495 Address tcp://127.0.0.1:33097 Status: Status.closing
-2022-08-26 14:10:17,251 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f5a0a787-2d86-407c-a044-df92dcc232f5 Address tcp://127.0.0.1:42353 Status: Status.closing
-2022-08-26 14:10:17,253 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:17,253 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:17,466 - distributed.utils_perf - WARNING - full garbage collections took 79% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_Scheduler__to_dict 2022-08-26 14:10:17,473 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:17,474 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:17,474 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44327
-2022-08-26 14:10:17,474 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42339
-2022-08-26 14:10:17,477 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38139
-2022-08-26 14:10:17,477 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38139
-2022-08-26 14:10:17,477 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:17,477 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40393
-2022-08-26 14:10:17,477 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44327
-2022-08-26 14:10:17,477 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:17,477 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:17,477 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:17,477 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ndrtm7ui
-2022-08-26 14:10:17,478 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:17,479 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38139', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:17,480 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38139
-2022-08-26 14:10:17,480 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:17,480 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44327
-2022-08-26 14:10:17,480 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:17,480 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:17,493 - distributed.scheduler - INFO - Receive client connection: Client-7c4ff37c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:17,494 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:17,517 - distributed.scheduler - INFO - Remove client Client-7c4ff37c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:17,517 - distributed.scheduler - INFO - Remove client Client-7c4ff37c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:17,517 - distributed.scheduler - INFO - Close client connection: Client-7c4ff37c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:17,518 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38139
-2022-08-26 14:10:17,519 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e9350f2e-b225-4eb2-99bb-d69f64e532d3 Address tcp://127.0.0.1:38139 Status: Status.closing
-2022-08-26 14:10:17,519 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38139', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:17,519 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38139
-2022-08-26 14:10:17,519 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:17,520 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:17,520 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:17,733 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_TaskState__to_dict 2022-08-26 14:10:17,739 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:17,741 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:17,741 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43461
-2022-08-26 14:10:17,741 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33489
-2022-08-26 14:10:17,744 - distributed.scheduler - INFO - Receive client connection: Client-7c762f5b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:17,744 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:17,767 - distributed.scheduler - INFO - Remove client Client-7c762f5b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:17,767 - distributed.scheduler - INFO - Remove client Client-7c762f5b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:17,767 - distributed.scheduler - INFO - Close client connection: Client-7c762f5b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:17,767 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:17,768 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:17,980 - distributed.utils_perf - WARNING - full garbage collections took 80% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_get_cluster_state 2022-08-26 14:10:17,985 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:17,987 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:17,987 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33481
-2022-08-26 14:10:17,987 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34055
-2022-08-26 14:10:17,991 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42805
-2022-08-26 14:10:17,991 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42805
-2022-08-26 14:10:17,991 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:17,992 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40083
-2022-08-26 14:10:17,992 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33481
-2022-08-26 14:10:17,992 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:17,992 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:17,992 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:17,992 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ub7nxtij
-2022-08-26 14:10:17,992 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:17,992 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38513
-2022-08-26 14:10:17,992 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38513
-2022-08-26 14:10:17,992 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:17,992 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41113
-2022-08-26 14:10:17,993 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33481
-2022-08-26 14:10:17,993 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:17,993 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:17,993 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:17,993 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e4mpc4ju
-2022-08-26 14:10:17,993 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:17,996 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42805', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:17,996 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42805
-2022-08-26 14:10:17,996 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:17,996 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38513', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:17,996 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38513
-2022-08-26 14:10:17,997 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:17,997 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33481
-2022-08-26 14:10:17,997 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:17,997 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33481
-2022-08-26 14:10:17,997 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:17,997 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:17,997 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,018 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42805
-2022-08-26 14:10:18,018 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38513
-2022-08-26 14:10:18,019 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42805', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:18,019 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42805
-2022-08-26 14:10:18,019 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38513', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:18,019 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38513
-2022-08-26 14:10:18,019 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:18,019 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b3731e97-689d-4bad-aa7b-e88e25b240b5 Address tcp://127.0.0.1:42805 Status: Status.closing
-2022-08-26 14:10:18,020 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dbc18687-2af4-40ef-b6d4-766724aeba3c Address tcp://127.0.0.1:38513 Status: Status.closing
-2022-08-26 14:10:18,022 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:18,022 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:18,235 - distributed.utils_perf - WARNING - full garbage collections took 82% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_get_cluster_state_worker_error 2022-08-26 14:10:18,240 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:18,242 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:18,242 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34199
-2022-08-26 14:10:18,242 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34265
-2022-08-26 14:10:18,247 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42027
-2022-08-26 14:10:18,247 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42027
-2022-08-26 14:10:18,247 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:18,247 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40677
-2022-08-26 14:10:18,247 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34199
-2022-08-26 14:10:18,247 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,247 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:18,247 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:18,247 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fzyt68ib
-2022-08-26 14:10:18,247 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,248 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42585
-2022-08-26 14:10:18,248 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42585
-2022-08-26 14:10:18,248 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:18,248 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44149
-2022-08-26 14:10:18,248 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34199
-2022-08-26 14:10:18,248 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,248 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:18,248 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:18,248 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zedkv5b0
-2022-08-26 14:10:18,248 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,251 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42027', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:18,251 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42027
-2022-08-26 14:10:18,251 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,252 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42585', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:18,252 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42585
-2022-08-26 14:10:18,252 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,252 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34199
-2022-08-26 14:10:18,252 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,252 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34199
-2022-08-26 14:10:18,252 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,253 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,253 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,465 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:42027 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:42027 after 0.2 s
-2022-08-26 14:10:18,465 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:42027 failed: OSError: Timed out trying to connect to tcp://127.0.0.1:42027 after 0.2 s
-2022-08-26 14:10:18,466 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42027
-2022-08-26 14:10:18,467 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42585
-2022-08-26 14:10:18,467 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42027', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:18,467 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42027
-2022-08-26 14:10:18,468 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42585', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:18,468 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42585
-2022-08-26 14:10:18,468 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:18,468 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dc2c3dad-8a61-4a74-8d9a-37d9e58c21cb Address tcp://127.0.0.1:42027 Status: Status.closing
-2022-08-26 14:10:18,468 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-45432a82-19f3-4185-9c6d-605ef5368991 Address tcp://127.0.0.1:42585 Status: Status.closing
-2022-08-26 14:10:18,469 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:18,469 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:18,682 - distributed.utils_perf - WARNING - full garbage collections took 82% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_dump_cluster_state[msgpack] 2022-08-26 14:10:18,688 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:18,690 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:18,690 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33907
-2022-08-26 14:10:18,690 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33947
-2022-08-26 14:10:18,694 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43671
-2022-08-26 14:10:18,695 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43671
-2022-08-26 14:10:18,695 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:18,695 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42687
-2022-08-26 14:10:18,695 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33907
-2022-08-26 14:10:18,695 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,695 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:18,695 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:18,695 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mv2g21ty
-2022-08-26 14:10:18,695 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,695 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34761
-2022-08-26 14:10:18,696 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34761
-2022-08-26 14:10:18,696 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:18,696 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41121
-2022-08-26 14:10:18,696 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33907
-2022-08-26 14:10:18,696 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,696 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:18,696 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:18,696 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uwn9xxux
-2022-08-26 14:10:18,696 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,699 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43671', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:18,699 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43671
-2022-08-26 14:10:18,699 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,699 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34761', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:18,700 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34761
-2022-08-26 14:10:18,700 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,700 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33907
-2022-08-26 14:10:18,700 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,700 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33907
-2022-08-26 14:10:18,700 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,701 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,701 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,722 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43671
-2022-08-26 14:10:18,722 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34761
-2022-08-26 14:10:18,723 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43671', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:18,723 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43671
-2022-08-26 14:10:18,723 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34761', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:18,724 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34761
-2022-08-26 14:10:18,724 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:18,724 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-53117386-fe98-4af2-a314-193f45208504 Address tcp://127.0.0.1:43671 Status: Status.closing
-2022-08-26 14:10:18,724 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8f814106-4ee2-4c28-ab49-3ea7dbaa5d40 Address tcp://127.0.0.1:34761 Status: Status.closing
-2022-08-26 14:10:18,726 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:18,727 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:18,940 - distributed.utils_perf - WARNING - full garbage collections took 82% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_dump_cluster_state[yaml] 2022-08-26 14:10:18,946 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:18,947 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:18,947 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36303
-2022-08-26 14:10:18,947 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36443
-2022-08-26 14:10:18,952 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38271
-2022-08-26 14:10:18,952 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38271
-2022-08-26 14:10:18,952 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:18,952 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42363
-2022-08-26 14:10:18,952 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36303
-2022-08-26 14:10:18,952 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,952 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:18,952 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:18,952 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uwxt33jc
-2022-08-26 14:10:18,952 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,953 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37951
-2022-08-26 14:10:18,953 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37951
-2022-08-26 14:10:18,953 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:18,953 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37459
-2022-08-26 14:10:18,953 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36303
-2022-08-26 14:10:18,953 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,953 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:18,953 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:18,953 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-58pm6c4b
-2022-08-26 14:10:18,953 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,956 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38271', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:18,956 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38271
-2022-08-26 14:10:18,956 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,957 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37951', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:18,957 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37951
-2022-08-26 14:10:18,957 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,957 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36303
-2022-08-26 14:10:18,957 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,958 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36303
-2022-08-26 14:10:18,958 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:18,958 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:18,958 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:19,144 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38271
-2022-08-26 14:10:19,145 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37951
-2022-08-26 14:10:19,146 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38271', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:19,146 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38271
-2022-08-26 14:10:19,146 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37951', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:19,146 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37951
-2022-08-26 14:10:19,146 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:19,146 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d0344115-709c-49bd-8ed3-0471e93cdcd0 Address tcp://127.0.0.1:38271 Status: Status.closing
-2022-08-26 14:10:19,146 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a4938712-0b6c-4626-a42c-a257487983a9 Address tcp://127.0.0.1:37951 Status: Status.closing
-2022-08-26 14:10:19,188 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:19,189 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:19,401 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_idempotent_plugins 2022-08-26 14:10:19,407 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:19,409 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:19,409 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43113
-2022-08-26 14:10:19,409 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35671
-2022-08-26 14:10:19,410 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:19,410 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:19,623 - distributed.utils_perf - WARNING - full garbage collections took 82% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_non_idempotent_plugins 2022-08-26 14:10:19,628 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:19,630 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:19,630 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37837
-2022-08-26 14:10:19,630 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39561
-2022-08-26 14:10:19,631 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:19,631 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:19,842 - distributed.utils_perf - WARNING - full garbage collections took 82% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_repr 2022-08-26 14:10:19,848 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:19,849 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:19,850 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42993
-2022-08-26 14:10:19,850 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40539
-2022-08-26 14:10:19,852 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34667
-2022-08-26 14:10:19,852 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34667
-2022-08-26 14:10:19,853 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:19,853 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40889
-2022-08-26 14:10:19,853 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42993
-2022-08-26 14:10:19,853 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:19,853 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:19,853 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:19,853 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rq2k6ipo
-2022-08-26 14:10:19,853 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:19,855 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34667', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:19,855 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34667
-2022-08-26 14:10:19,855 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:19,855 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42993
-2022-08-26 14:10:19,855 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:19,855 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:19,868 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42953
-2022-08-26 14:10:19,868 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42953
-2022-08-26 14:10:19,868 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33977
-2022-08-26 14:10:19,868 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42993
-2022-08-26 14:10:19,868 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:19,869 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:19,869 - distributed.worker - INFO -                Memory:                  10.47 GiB
-2022-08-26 14:10:19,869 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j7l1g3z3
-2022-08-26 14:10:19,869 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:19,870 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42953', status: init, memory: 0, processing: 0>
-2022-08-26 14:10:19,871 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42953
-2022-08-26 14:10:19,871 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:19,871 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42993
-2022-08-26 14:10:19,871 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:19,871 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:19,882 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42953
-2022-08-26 14:10:19,882 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42953', status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:19,882 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42953
-2022-08-26 14:10:19,883 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b8925544-5272-49e4-9031-d3976b46c930 Address tcp://127.0.0.1:42953 Status: Status.closing
-2022-08-26 14:10:19,883 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34667
-2022-08-26 14:10:19,884 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34667', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:19,884 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34667
-2022-08-26 14:10:19,884 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:19,884 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b370c174-214c-4db4-be64-7b5831baaaec Address tcp://127.0.0.1:34667 Status: Status.closing
-2022-08-26 14:10:19,885 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:19,885 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:20,097 - distributed.utils_perf - WARNING - full garbage collections took 82% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_ensure_events_dont_include_taskstate_objects 2022-08-26 14:10:20,103 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:20,104 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:20,104 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46631
-2022-08-26 14:10:20,104 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41419
-2022-08-26 14:10:20,109 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42043
-2022-08-26 14:10:20,109 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42043
-2022-08-26 14:10:20,109 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:20,109 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46237
-2022-08-26 14:10:20,109 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46631
-2022-08-26 14:10:20,109 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,109 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:20,109 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:20,109 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ap0287j2
-2022-08-26 14:10:20,109 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,110 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45381
-2022-08-26 14:10:20,110 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45381
-2022-08-26 14:10:20,110 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:10:20,110 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43191
-2022-08-26 14:10:20,110 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46631
-2022-08-26 14:10:20,110 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,110 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:10:20,110 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:20,110 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-v8d1usap
-2022-08-26 14:10:20,110 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,113 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42043', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:20,113 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42043
-2022-08-26 14:10:20,113 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:20,114 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45381', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:20,114 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45381
-2022-08-26 14:10:20,114 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:20,114 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46631
-2022-08-26 14:10:20,114 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,114 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46631
-2022-08-26 14:10:20,114 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,115 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:20,115 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:20,128 - distributed.scheduler - INFO - Receive client connection: Client-7de2036e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:20,129 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:20,236 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42043
-2022-08-26 14:10:20,236 - distributed.worker - INFO - Not waiting on executor to close
-2022-08-26 14:10:20,237 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-90078cf4-d50b-46a8-81e3-c5008f7e718d Address tcp://127.0.0.1:42043 Status: Status.closing
-2022-08-26 14:10:20,237 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42043', name: 0, status: closing, memory: 0, processing: 34>
-2022-08-26 14:10:20,237 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42043
-2022-08-26 14:10:20,383 - distributed.scheduler - INFO - Remove client Client-7de2036e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:20,383 - distributed.scheduler - INFO - Remove client Client-7de2036e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:20,383 - distributed.scheduler - INFO - Close client connection: Client-7de2036e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:20,384 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45381
-2022-08-26 14:10:20,385 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45381', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:20,385 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45381
-2022-08-26 14:10:20,385 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:20,385 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5519ffac-59fb-4bbc-bf33-b12437a840d3 Address tcp://127.0.0.1:45381 Status: Status.closing
-2022-08-26 14:10:20,385 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:20,386 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:20,602 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_worker_state_unique_regardless_of_address 2022-08-26 14:10:20,608 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:20,610 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:20,610 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39115
-2022-08-26 14:10:20,610 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42587
-2022-08-26 14:10:20,613 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45517
-2022-08-26 14:10:20,613 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45517
-2022-08-26 14:10:20,613 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:20,613 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40637
-2022-08-26 14:10:20,613 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39115
-2022-08-26 14:10:20,613 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,613 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:20,613 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:20,613 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z8qq82fe
-2022-08-26 14:10:20,613 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,615 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45517', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:20,615 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45517
-2022-08-26 14:10:20,615 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:20,616 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39115
-2022-08-26 14:10:20,616 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,616 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:20,626 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45517
-2022-08-26 14:10:20,627 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45517', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:20,627 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45517
-2022-08-26 14:10:20,627 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:20,627 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bfd316f5-95c4-4962-b686-0efd38264b6f Address tcp://127.0.0.1:45517 Status: Status.closing
-2022-08-26 14:10:20,630 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45517
-2022-08-26 14:10:20,630 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45517
-2022-08-26 14:10:20,630 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35885
-2022-08-26 14:10:20,630 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39115
-2022-08-26 14:10:20,630 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,631 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:10:20,631 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:20,631 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e4xi8ion
-2022-08-26 14:10:20,631 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,632 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45517', status: init, memory: 0, processing: 0>
-2022-08-26 14:10:20,633 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45517
-2022-08-26 14:10:20,633 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:20,633 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39115
-2022-08-26 14:10:20,633 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,633 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45517
-2022-08-26 14:10:20,634 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:20,634 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-62d506ed-39b2-45af-81e0-edccfc4cff46 Address tcp://127.0.0.1:45517 Status: Status.closing
-2022-08-26 14:10:20,634 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45517', status: closing, memory: 0, processing: 0>
-2022-08-26 14:10:20,634 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45517
-2022-08-26 14:10:20,634 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:20,635 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:20,635 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:20,854 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_scheduler_close_fast_deprecated 2022-08-26 14:10:20,860 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:20,862 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:20,862 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41433
-2022-08-26 14:10:20,862 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35807
-2022-08-26 14:10:20,865 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34643
-2022-08-26 14:10:20,865 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34643
-2022-08-26 14:10:20,865 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:10:20,865 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34309
-2022-08-26 14:10:20,865 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41433
-2022-08-26 14:10:20,865 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,865 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:20,865 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:20,865 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-53vwpbge
-2022-08-26 14:10:20,865 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,867 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34643', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:10:20,867 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34643
-2022-08-26 14:10:20,867 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:20,868 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41433
-2022-08-26 14:10:20,868 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:20,868 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:20,879 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:10:20,879 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:10:20,879 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34643', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:10:20,879 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34643
-2022-08-26 14:10:20,879 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:10:20,879 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34643
-2022-08-26 14:10:20,880 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-327f6393-7513-4d25-a15e-27ce446a554c Address tcp://127.0.0.1:34643 Status: Status.closing
-2022-08-26 14:10:20,880 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:59072 remote=tcp://127.0.0.1:41433>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:10:21,094 - distributed.utils_perf - WARNING - full garbage collections took 81% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_scheduler.py::test_runspec_regression_sync PASSED
-distributed/tests/test_security.py::test_defaults PASSED
-distributed/tests/test_security.py::test_constructor_errors PASSED
-distributed/tests/test_security.py::test_attribute_error PASSED
-distributed/tests/test_security.py::test_from_config PASSED
-distributed/tests/test_security.py::test_min_max_version_from_config[None-None] PASSED
-distributed/tests/test_security.py::test_min_max_version_from_config[None-1.2] PASSED
-distributed/tests/test_security.py::test_min_max_version_from_config[None-1.3] PASSED
-distributed/tests/test_security.py::test_min_max_version_from_config[1.2-None] PASSED
-distributed/tests/test_security.py::test_min_max_version_from_config[1.2-1.2] PASSED
-distributed/tests/test_security.py::test_min_max_version_from_config[1.2-1.3] PASSED
-distributed/tests/test_security.py::test_min_max_version_from_config[1.3-None] PASSED
-distributed/tests/test_security.py::test_min_max_version_from_config[1.3-1.2] PASSED
-distributed/tests/test_security.py::test_min_max_version_from_config[1.3-1.3] PASSED
-distributed/tests/test_security.py::test_min_max_version_config_errors[min-version] PASSED
-distributed/tests/test_security.py::test_min_max_version_config_errors[max-version] PASSED
-distributed/tests/test_security.py::test_invalid_min_version_from_config_errors PASSED
-distributed/tests/test_security.py::test_kwargs PASSED
-distributed/tests/test_security.py::test_min_max_version_kwarg_errors[tls_min_version] PASSED
-distributed/tests/test_security.py::test_min_max_version_kwarg_errors[tls_max_version] PASSED
-distributed/tests/test_security.py::test_repr_temp_keys PASSED
-distributed/tests/test_security.py::test_repr_local_keys PASSED
-distributed/tests/test_security.py::test_tls_config_for_role PASSED
-distributed/tests/test_security.py::test_connection_args PASSED
-distributed/tests/test_security.py::test_extra_conn_args_connection_args PASSED
-distributed/tests/test_security.py::test_listen_args PASSED
-distributed/tests/test_security.py::test_tls_listen_connect 2022-08-26 14:10:22,835 - tornado.application - ERROR - Exception in callback functools.partial(<function TCPServer._handle_connection.<locals>.<lambda> at 0x56404125b870>, <Task finished name='Task-173083' coro=<BaseTCPListener._handle_stream() done, defined at /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py:588> exception=CommClosedError('in <TLS (closed)  local=tls://192.168.1.159:43193 remote=tls://192.168.1.159:36066>: Stream is closed')>)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 317, in write
-    raise StreamClosedError()
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 740, in _run_callback
-    ret = callback()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/tcpserver.py", line 391, in <lambda>
-    gen.convert_yielded(future), lambda f: f.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 605, in _handle_stream
-    await self.comm_handler(comm)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_security.py", line 345, in handle_comm
-    await comm.write("hello")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 328, in write
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TLS (closed)  local=tls://192.168.1.159:43193 remote=tls://192.168.1.159:36066>: Stream is closed
-PASSED
-distributed/tests/test_security.py::test_require_encryption PASSED
-distributed/tests/test_security.py::test_temporary_credentials PASSED
-distributed/tests/test_security.py::test_extra_conn_args_in_temporary_credentials PASSED
-distributed/tests/test_security.py::test_tls_temporary_credentials_functional PASSED
-distributed/tests/test_semaphore.py::test_semaphore_trivial 2022-08-26 14:10:23,606 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_serializable 2022-08-26 14:10:23,911 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_release_simple 2022-08-26 14:10:24,223 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_acquires_with_timeout 2022-08-26 14:10:24,503 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_timeout_sync 2022-08-26 14:10:25,416 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:10:25,418 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:25,421 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:25,421 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34481
-2022-08-26 14:10:25,421 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:10:25,435 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44473
-2022-08-26 14:10:25,435 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44473
-2022-08-26 14:10:25,435 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33815
-2022-08-26 14:10:25,435 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34481
-2022-08-26 14:10:25,435 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:25,435 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:25,435 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:25,435 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0ky1qn07
-2022-08-26 14:10:25,435 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:25,477 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44399
-2022-08-26 14:10:25,477 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44399
-2022-08-26 14:10:25,477 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38681
-2022-08-26 14:10:25,477 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34481
-2022-08-26 14:10:25,477 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:25,477 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:25,477 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:25,477 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zuowlgrt
-2022-08-26 14:10:25,477 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:25,728 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44473', status: init, memory: 0, processing: 0>
-2022-08-26 14:10:26,004 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44473
-2022-08-26 14:10:26,004 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:26,005 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34481
-2022-08-26 14:10:26,005 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:26,005 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44399', status: init, memory: 0, processing: 0>
-2022-08-26 14:10:26,005 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44399
-2022-08-26 14:10:26,006 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:26,006 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:26,006 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34481
-2022-08-26 14:10:26,006 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:26,007 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:26,011 - distributed.scheduler - INFO - Receive client connection: Client-8163ab78-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:26,012 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:10:26,041 - distributed.scheduler - INFO - Remove client Client-8163ab78-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:26,041 - distributed.scheduler - INFO - Remove client Client-8163ab78-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:26,041 - distributed.scheduler - INFO - Close client connection: Client-8163ab78-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_semaphore.py::test_release_semaphore_after_timeout 2022-08-26 14:10:26,779 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_async_ctx 2022-08-26 14:10:27,055 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_worker_dies SKIPPED (need ...)
-distributed/tests/test_semaphore.py::test_access_semaphore_by_name 2022-08-26 14:10:28,343 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_close_async SKIPPED (need ...)
-distributed/tests/test_semaphore.py::test_close_sync 2022-08-26 14:10:29,266 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:10:29,269 - distributed.scheduler - INFO - State start
-2022-08-26 14:10:29,272 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:10:29,272 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35469
-2022-08-26 14:10:29,272 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:10:29,288 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40013
-2022-08-26 14:10:29,288 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40013
-2022-08-26 14:10:29,288 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43373
-2022-08-26 14:10:29,288 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35469
-2022-08-26 14:10:29,288 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:29,288 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:29,288 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:29,288 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-08ozixo3
-2022-08-26 14:10:29,288 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:29,331 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33665
-2022-08-26 14:10:29,331 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33665
-2022-08-26 14:10:29,331 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35757
-2022-08-26 14:10:29,331 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35469
-2022-08-26 14:10:29,331 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:29,331 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:10:29,331 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:10:29,331 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jvyyk9a_
-2022-08-26 14:10:29,331 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:29,585 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40013', status: init, memory: 0, processing: 0>
-2022-08-26 14:10:29,860 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40013
-2022-08-26 14:10:29,860 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:29,860 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35469
-2022-08-26 14:10:29,860 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:29,861 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33665', status: init, memory: 0, processing: 0>
-2022-08-26 14:10:29,861 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33665
-2022-08-26 14:10:29,861 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:29,861 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:29,861 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35469
-2022-08-26 14:10:29,862 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:10:29,862 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:29,867 - distributed.scheduler - INFO - Receive client connection: Client-83b004c6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:29,868 - distributed.core - INFO - Starting established connection
-2022-08-26 14:10:29,944 - distributed.core - ERROR - Semaphore `semaphore-181cc878fc2942bfb6d2b77d587d176c` not known or already closed.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/semaphore.py", line 148, in acquire
-    raise RuntimeError(f"Semaphore `{name}` not known or already closed.")
-RuntimeError: Semaphore `semaphore-181cc878fc2942bfb6d2b77d587d176c` not known or already closed.
-2022-08-26 14:10:29,944 - distributed.core - ERROR - Exception while handling op semaphore_acquire
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/semaphore.py", line 148, in acquire
-    raise RuntimeError(f"Semaphore `{name}` not known or already closed.")
-RuntimeError: Semaphore `semaphore-181cc878fc2942bfb6d2b77d587d176c` not known or already closed.
-PASSED2022-08-26 14:10:29,946 - distributed.scheduler - INFO - Remove client Client-83b004c6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:29,947 - distributed.scheduler - INFO - Remove client Client-83b004c6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:10:29,947 - distributed.scheduler - INFO - Close client connection: Client-83b004c6-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_semaphore.py::test_release_once_too_many 2022-08-26 14:10:30,217 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_release_once_too_many_resilience 2022-08-26 14:10:30,564 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_retry_acquire 2022-08-26 14:10:30,847 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_oversubscribing_leases 2022-08-26 14:10:31,117 - distributed.semaphore - CRITICAL - Refreshing an unknown lease ID 8ecc936fb7a94691b086263af505e279 for semaphore-60814a21d25647cb9b6f63919a27fe1d. This might be due to leases timing out and may cause overbooking of the semaphore!This is often caused by long-running GIL-holding in the task which acquired the lease.
-2022-08-26 14:10:31,277 - distributed.semaphore - CRITICAL - Refreshing an unknown lease ID 16f2b99f17b945b7ace9b5501d2bffcc for semaphore-60814a21d25647cb9b6f63919a27fe1d. This might be due to leases timing out and may cause overbooking of the semaphore!This is often caused by long-running GIL-holding in the task which acquired the lease.
-2022-08-26 14:10:31,599 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_timeout_zero 2022-08-26 14:10:31,861 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_getvalue 2022-08-26 14:10:32,122 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_metrics 2022-08-26 14:10:32,383 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_threadpoolworkers_pick_correct_ioloop PASSED
-distributed/tests/test_semaphore.py::test_release_retry 2022-08-26 14:10:34,717 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_semaphore.py::test_release_failure 2022-08-26 14:10:34,752 - distributed.semaphore - ERROR - Release failed for id=f1149ecc383d41d8928271efef1d0eba, lease_id=9becc71bb6fc468a97339781c58d9853, name=resource_we_want_to_limit. Cluster network might be unstable?
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/semaphore.py", line 486, in _release
-    await retry_operation(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 383, in retry_operation
-    return await retry(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 368, in retry
-    return await coro()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 1154, in send_recv_from_rpc
-    return await send_recv(comm=comm, op=key, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 917, in send_recv
-    await comm.write(msg, serializers=serializers, on_error="raise")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 2036, in write
-    raise OSError()
-OSError
-2022-08-26 14:10:35,174 - distributed.utils_perf - WARNING - full garbage collections took 75% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_sizeof.py::test_safe_sizeof[obj0] PASSED
-distributed/tests/test_sizeof.py::test_safe_sizeof[obj1] PASSED
-distributed/tests/test_sizeof.py::test_safe_sizeof[obj2] PASSED
-distributed/tests/test_sizeof.py::test_safe_sizeof_logs_on_failure 2022-08-26 14:10:35,179 - distributed.sizeof - WARNING - Sizeof calculation failed. Defaulting to 0.95 MiB
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/sizeof.py", line 17, in safe_sizeof
-    return sizeof(obj)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/utils.py", line 637, in __call__
-    return meth(arg, *args, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/sizeof.py", line 17, in sizeof_default
-    return sys.getsizeof(o)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_sizeof.py", line 21, in __sizeof__
-    raise ValueError("bar")
-ValueError: bar
-2022-08-26 14:10:35,180 - distributed.sizeof - WARNING - Sizeof calculation failed. Defaulting to 2.00 MiB
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/sizeof.py", line 17, in safe_sizeof
-    return sizeof(obj)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/utils.py", line 637, in __call__
-    return meth(arg, *args, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/sizeof.py", line 17, in sizeof_default
-    return sys.getsizeof(o)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_sizeof.py", line 21, in __sizeof__
-    raise ValueError("bar")
-ValueError: bar
-PASSED
-distributed/tests/test_spec.py::test_address_default_none PASSED
-distributed/tests/test_spec.py::test_child_address_persists PASSED
-distributed/tests/test_spill.py::test_psize PASSED
-distributed/tests/test_spill.py::test_spillbuffer PASSED
-distributed/tests/test_spill.py::test_disk_size_calculation PASSED
-distributed/tests/test_spill.py::test_spillbuffer_maxlim 2022-08-26 14:10:35,196 - distributed.spill - WARNING - Spill file on disk reached capacity; keeping data in memory
-2022-08-26 14:10:35,196 - distributed.spill - WARNING - Spill file on disk reached capacity; keeping data in memory
-2022-08-26 14:10:35,197 - distributed.spill - WARNING - Spill file on disk reached capacity; keeping data in memory
-2022-08-26 14:10:35,197 - distributed.spill - WARNING - Spill file on disk reached capacity; keeping data in memory
-PASSED
-distributed/tests/test_spill.py::test_spillbuffer_fail_to_serialize 2022-08-26 14:10:35,200 - distributed.spill - ERROR - Failed to pickle 'b'
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 283, in __setitem__
-    pickled = self.dump(value)  # type: ignore
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 643, in serialize_bytelist
-    header, frames = serialize_and_split(x, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type Bad', '<test_spill.Bad object at 0x564040d34990>')
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 114, in handle_errors
-    yield
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 188, in __setitem__
-    super().__setitem__(key, value)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 116, in __setitem__
-    self.fast[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 99, in __setitem__
-    set_()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 96, in set_
-    self.evict()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 125, in evict
-    cb(k, v)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 81, in fast_to_slow
-    self.slow[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/cache.py", line 65, in __setitem__
-    self.data[key] = value
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 288, in __setitem__
-    raise PickleError(key, e)
-distributed.spill.PickleError: ('b', TypeError('Could not serialize object of type Bad', '<test_spill.Bad object at 0x564040d34990>'))
-PASSED
-distributed/tests/test_spill.py::test_spillbuffer_oserror 2022-08-26 14:10:35,202 - distributed.spill - ERROR - Spill to disk failed; keeping data in memory
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 103, in __setitem__
-    cb(key, value)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 81, in fast_to_slow
-    self.slow[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/cache.py", line 65, in __setitem__
-    self.data[key] = value
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 314, in __setitem__
-    self.d[key] = pickled
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/file.py", line 99, in __setitem__
-    with open(fn, "wb") as fh:
-PermissionError: [Errno 13] Permission denied: '/tmp/pytest-of-matthew/pytest-12/test_spillbuffer_oserror0/c'
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 114, in handle_errors
-    yield
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 188, in __setitem__
-    super().__setitem__(key, value)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 116, in __setitem__
-    self.fast[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 106, in __setitem__
-    set_()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 96, in set_
-    self.evict()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 125, in evict
-    cb(k, v)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 81, in fast_to_slow
-    self.slow[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/cache.py", line 65, in __setitem__
-    self.data[key] = value
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 314, in __setitem__
-    self.d[key] = pickled
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/file.py", line 99, in __setitem__
-    with open(fn, "wb") as fh:
-PermissionError: [Errno 13] Permission denied: '/tmp/pytest-of-matthew/pytest-12/test_spillbuffer_oserror0/b'
-2022-08-26 14:10:35,203 - distributed.spill - ERROR - Spill to disk failed; keeping data in memory
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 114, in handle_errors
-    yield
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 188, in __setitem__
-    super().__setitem__(key, value)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 116, in __setitem__
-    self.fast[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 99, in __setitem__
-    set_()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 96, in set_
-    self.evict()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 125, in evict
-    cb(k, v)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 81, in fast_to_slow
-    self.slow[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/cache.py", line 65, in __setitem__
-    self.data[key] = value
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 314, in __setitem__
-    self.d[key] = pickled
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/file.py", line 99, in __setitem__
-    with open(fn, "wb") as fh:
-PermissionError: [Errno 13] Permission denied: '/tmp/pytest-of-matthew/pytest-12/test_spillbuffer_oserror0/b'
-PASSED
-distributed/tests/test_spill.py::test_spillbuffer_evict 2022-08-26 14:10:35,205 - distributed.spill - ERROR - Failed to pickle 'bad'
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 283, in __setitem__
-    pickled = self.dump(value)  # type: ignore
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 643, in serialize_bytelist
-    header, frames = serialize_and_split(x, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type Bad', '<test_spill.Bad object at 0x5640426096d0>')
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 114, in handle_errors
-    yield
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 210, in evict
-    _, _, weight = self.fast.evict()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 125, in evict
-    cb(k, v)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 81, in fast_to_slow
-    self.slow[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/cache.py", line 65, in __setitem__
-    self.data[key] = value
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 288, in __setitem__
-    raise PickleError(key, e)
-distributed.spill.PickleError: ('bad', TypeError('Could not serialize object of type Bad', '<test_spill.Bad object at 0x5640426096d0>'))
-PASSED
-distributed/tests/test_spill.py::test_weakref_cache[60-SupportsWeakRef-True] PASSED
-distributed/tests/test_spill.py::test_weakref_cache[60-NoWeakRef-False] PASSED
-distributed/tests/test_spill.py::test_weakref_cache[110-SupportsWeakRef-True] PASSED
-distributed/tests/test_spill.py::test_weakref_cache[110-NoWeakRef-False] PASSED
-distributed/tests/test_steal.py::test_work_stealing 2022-08-26 14:10:35,773 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_dont_steal_expensive_data_fast_computation 2022-08-26 14:10:36,083 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_cheap_data_slow_computation 2022-08-26 14:10:37,072 - distributed.utils_perf - WARNING - full garbage collections took 74% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_expensive_data_slow_computation 2022-08-26 14:10:38,703 - distributed.utils_perf - WARNING - full garbage collections took 73% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_worksteal_many_thieves 2022-08-26 14:10:40,287 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_stop_plugin 2022-08-26 14:10:40,777 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_stop_in_flight 2022-08-26 14:10:41,605 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_allow_tasks_stolen_before_first_completes 2022-08-26 14:10:42,973 - distributed.utils_perf - WARNING - full garbage collections took 71% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_eventually_steal_unknown_functions 2022-08-26 14:10:43,765 - distributed.utils_perf - WARNING - full garbage collections took 70% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_related_tasks SKIPPED
-distributed/tests/test_steal.py::test_dont_steal_fast_tasks_compute_time 2022-08-26 14:10:44,252 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_dont_steal_fast_tasks_blocklist 2022-08-26 14:10:45,692 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_new_worker_steals 2022-08-26 14:10:48,638 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_work_steal_no_kwargs 2022-08-26 14:10:50,823 - distributed.utils_perf - WARNING - full garbage collections took 65% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_dont_steal_worker_restrictions 2022-08-26 14:10:51,431 - distributed.utils_perf - WARNING - full garbage collections took 65% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_worker_restrictions 2022-08-26 14:10:52,051 - distributed.utils_perf - WARNING - full garbage collections took 69% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_dont_steal_host_restrictions 2022-08-26 14:10:52,657 - distributed.utils_perf - WARNING - full garbage collections took 68% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_host_restrictions 2022-08-26 14:10:53,291 - distributed.utils_perf - WARNING - full garbage collections took 68% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_dont_steal_resource_restrictions 2022-08-26 14:10:53,897 - distributed.utils_perf - WARNING - full garbage collections took 68% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_resource_restrictions 2022-08-26 14:10:54,695 - distributed.utils_perf - WARNING - full garbage collections took 68% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_resource_restrictions_asym_diff 2022-08-26 14:10:55,493 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_balance_without_dependencies 2022-08-26 14:10:56,909 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_dont_steal_executing_tasks 2022-08-26 14:10:57,292 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_dont_steal_executing_tasks_2 2022-08-26 14:10:58,054 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_dont_steal_few_saturated_tasks_many_workers 2022-08-26 14:10:58,796 - distributed.utils_perf - WARNING - full garbage collections took 67% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_when_more_tasks 2022-08-26 14:10:59,502 - distributed.utils_perf - WARNING - full garbage collections took 65% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_more_attractive_tasks 2022-08-26 14:11:00,033 - distributed.utils_perf - WARNING - full garbage collections took 65% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_balance[inp0-expected0] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp1-expected1] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp2-expected2] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp3-expected3] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp4-expected4] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp5-expected5] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp6-expected6] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp7-expected7] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp8-expected8] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp9-expected9] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp10-expected10] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp11-expected11] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp12-expected12] SKIPPED
-distributed/tests/test_steal.py::test_balance[inp13-expected13] SKIPPED
-distributed/tests/test_steal.py::test_restart 2022-08-26 14:11:00,780 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41323
-2022-08-26 14:11:00,780 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41323
-2022-08-26 14:11:00,780 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:11:00,780 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41507
-2022-08-26 14:11:00,780 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45435
-2022-08-26 14:11:00,780 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:00,780 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:00,780 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:00,780 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z2_aqhvd
-2022-08-26 14:11:00,780 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:00,782 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38575
-2022-08-26 14:11:00,782 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38575
-2022-08-26 14:11:00,782 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:11:00,782 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38643
-2022-08-26 14:11:00,782 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45435
-2022-08-26 14:11:00,782 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:00,782 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:00,782 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:00,782 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-u80q_2oa
-2022-08-26 14:11:00,782 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:01,063 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45435
-2022-08-26 14:11:01,063 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:01,064 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:01,080 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45435
-2022-08-26 14:11:01,081 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:01,081 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:01,341 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38575
-2022-08-26 14:11:01,341 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41323
-2022-08-26 14:11:01,342 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bb431b8d-22bd-4bb2-8438-5f0a4a1de3c5 Address tcp://127.0.0.1:38575 Status: Status.closing
-2022-08-26 14:11:01,342 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-edac7097-8567-4e23-84fd-90cbe1888586 Address tcp://127.0.0.1:41323 Status: Status.closing
-2022-08-26 14:11:01,514 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:11:01,517 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:11:02,235 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36121
-2022-08-26 14:11:02,235 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36121
-2022-08-26 14:11:02,235 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:11:02,235 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46865
-2022-08-26 14:11:02,235 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45435
-2022-08-26 14:11:02,235 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:02,236 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:02,236 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:02,236 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e_bu75jh
-2022-08-26 14:11:02,236 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:02,248 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42055
-2022-08-26 14:11:02,248 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42055
-2022-08-26 14:11:02,248 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:11:02,248 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40885
-2022-08-26 14:11:02,248 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45435
-2022-08-26 14:11:02,248 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:02,248 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:02,248 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:02,248 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0ra_70o1
-2022-08-26 14:11:02,248 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:02,532 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45435
-2022-08-26 14:11:02,532 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:02,532 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:02,536 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45435
-2022-08-26 14:11:02,536 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:02,537 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:02,726 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42055
-2022-08-26 14:11:02,727 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36121
-2022-08-26 14:11:02,727 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5c02cfe0-c5ba-451b-a363-aeab7af897f6 Address tcp://127.0.0.1:42055 Status: Status.closing
-2022-08-26 14:11:02,727 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-710de619-51d1-4087-abd8-add5b1dc148f Address tcp://127.0.0.1:36121 Status: Status.closing
-2022-08-26 14:11:03,076 - distributed.utils_perf - WARNING - full garbage collections took 64% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_communication_heavy_tasks 2022-08-26 14:11:04,464 - distributed.utils_perf - WARNING - full garbage collections took 64% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_twice 2022-08-26 14:11:05,845 - distributed.utils_perf - WARNING - full garbage collections took 61% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_paused_workers_must_not_steal 2022-08-26 14:11:06,756 - distributed.utils_perf - WARNING - full garbage collections took 61% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_dont_steal_already_released 2022-08-26 14:11:07,165 - distributed.utils_perf - WARNING - full garbage collections took 61% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_dont_steal_long_running_tasks 2022-08-26 14:11:08,173 - distributed.utils_perf - WARNING - full garbage collections took 61% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_cleanup_repeated_tasks 2022-08-26 14:11:08,944 - distributed.utils_perf - WARNING - full garbage collections took 60% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_lose_task 2022-08-26 14:11:10,395 - distributed.utils_perf - WARNING - full garbage collections took 57% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_parse_stealing_interval[None-100] 2022-08-26 14:11:10,628 - distributed.utils_perf - WARNING - full garbage collections took 59% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_parse_stealing_interval[500ms-500] 2022-08-26 14:11:10,851 - distributed.utils_perf - WARNING - full garbage collections took 59% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_parse_stealing_interval[2-2] 2022-08-26 14:11:11,074 - distributed.utils_perf - WARNING - full garbage collections took 59% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_balance_with_longer_task 2022-08-26 14:11:16,350 - distributed.utils_perf - WARNING - full garbage collections took 59% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_blocklist_shuffle_split 2022-08-26 14:11:16,978 - distributed.utils_perf - WARNING - full garbage collections took 58% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_concurrent_simple 2022-08-26 14:11:17,458 - distributed.utils_perf - WARNING - full garbage collections took 59% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_reschedule_reset_in_flight_occupancy 2022-08-26 14:11:17,824 - distributed.utils_perf - WARNING - full garbage collections took 60% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_get_story 2022-08-26 14:11:18,930 - distributed.utils_perf - WARNING - full garbage collections took 60% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_worker_dies_same_ip 2022-08-26 14:11:19,448 - distributed.utils_perf - WARNING - full garbage collections took 61% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_reschedule_concurrent_requests_deadlock 2022-08-26 14:11:19,767 - distributed.utils_perf - WARNING - full garbage collections took 61% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_correct_bad_time_estimate 2022-08-26 14:11:21,087 - distributed.utils_perf - WARNING - full garbage collections took 61% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_stimulus_id_unique 2022-08-26 14:11:22,021 - distributed.utils_perf - WARNING - full garbage collections took 58% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_steal.py::test_steal_worker_state[executing] PASSED
-distributed/tests/test_steal.py::test_steal_worker_state[long-running] PASSED
-distributed/tests/test_stories.py::test_scheduler_story_stimulus_success 2022-08-26 14:11:22,315 - distributed.utils_perf - WARNING - full garbage collections took 58% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_stories.py::test_scheduler_story_stimulus_retry 2022-08-26 14:11:22,358 - distributed.worker - WARNING - Compute Failed
-Key:       task-0825f2e9d9b3d4ec945e657636f9c7f3
-Function:  task
-args:      ()
-kwargs:    {}
-Exception: 'AssertionError("assert False\\n +  where False = <function get at 0x564036cfd200>(\'foo\')\\n +    where <function get at 0x564036cfd200> = <module \'dask.config\' from \'/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/config.py\'>.get\\n +      where <module \'dask.config\' from \'/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/config.py\'> = dask.config")'
-
-2022-08-26 14:11:22,588 - distributed.utils_perf - WARNING - full garbage collections took 59% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_stories.py::test_client_story 2022-08-26 14:11:22,858 - distributed.utils_perf - WARNING - full garbage collections took 59% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_stories.py::test_client_story_failed_worker[ignore] 2022-08-26 14:11:22,907 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:35409 failed: CommClosedError: in <TCP (closed) Scheduler Broadcast local=tcp://127.0.0.1:44560 remote=tcp://127.0.0.1:35409>: Stream is closed
-2022-08-26 14:11:22,907 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:36983 failed: CommClosedError: in <TCP (closed) Scheduler Broadcast local=tcp://127.0.0.1:45482 remote=tcp://127.0.0.1:36983>: Stream is closed
-2022-08-26 14:11:23,134 - distributed.utils_perf - WARNING - full garbage collections took 59% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_stories.py::test_client_story_failed_worker[raise] 2022-08-26 14:11:23,183 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:38119 failed: CommClosedError: in <TCP (closed) Scheduler Broadcast local=tcp://127.0.0.1:40088 remote=tcp://127.0.0.1:38119>: Stream is closed
-2022-08-26 14:11:23,183 - distributed.scheduler - ERROR - broadcast to tcp://127.0.0.1:39209 failed: CommClosedError: in <TCP (closed) Scheduler Broadcast local=tcp://127.0.0.1:36802 remote=tcp://127.0.0.1:39209>: Stream is closed
-2022-08-26 14:11:23,410 - distributed.utils_perf - WARNING - full garbage collections took 60% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_stories.py::test_worker_story_with_deps 2022-08-26 14:11:23,697 - distributed.utils_perf - WARNING - full garbage collections took 60% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_stress.py::test_stress_1 2022-08-26 14:11:24,616 - distributed.utils_perf - WARNING - full garbage collections took 59% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_stress.py::test_stress_gc[slowinc-100] SKIPPED
-distributed/tests/test_stress.py::test_stress_gc[inc-1000] SKIPPED (...)
-distributed/tests/test_stress.py::test_cancel_stress 2022-08-26 14:11:28,591 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_stress.py::test_cancel_stress_sync 2022-08-26 14:11:29,543 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:11:29,546 - distributed.scheduler - INFO - State start
-2022-08-26 14:11:29,549 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:11:29,549 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45961
-2022-08-26 14:11:29,549 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:11:29,570 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38419
-2022-08-26 14:11:29,570 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38419
-2022-08-26 14:11:29,570 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36671
-2022-08-26 14:11:29,570 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45961
-2022-08-26 14:11:29,570 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:29,570 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:29,570 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:29,570 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c7k_y11x
-2022-08-26 14:11:29,570 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:29,620 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46537
-2022-08-26 14:11:29,620 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46537
-2022-08-26 14:11:29,620 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41635
-2022-08-26 14:11:29,620 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45961
-2022-08-26 14:11:29,620 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:29,620 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:29,620 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:29,620 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rm91db4a
-2022-08-26 14:11:29,620 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:29,869 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38419', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:30,144 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38419
-2022-08-26 14:11:30,145 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:30,145 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45961
-2022-08-26 14:11:30,145 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:30,145 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46537', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:30,146 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:30,146 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46537
-2022-08-26 14:11:30,146 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:30,146 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45961
-2022-08-26 14:11:30,146 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:30,147 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:30,152 - distributed.scheduler - INFO - Receive client connection: Client-a79eb3bf-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:30,152 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:30,906 - distributed.scheduler - INFO - Client Client-a79eb3bf-2583-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:11:30,907 - distributed.scheduler - INFO - Scheduler cancels key finalize-80253a2a4cc81a804986d480e5424edd.  Force=False
-2022-08-26 14:11:31,409 - distributed.scheduler - INFO - Client Client-a79eb3bf-2583-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:11:31,410 - distributed.scheduler - INFO - Scheduler cancels key finalize-80253a2a4cc81a804986d480e5424edd.  Force=False
-2022-08-26 14:11:32,536 - distributed.scheduler - INFO - Client Client-a79eb3bf-2583-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:11:32,537 - distributed.scheduler - INFO - Scheduler cancels key finalize-80253a2a4cc81a804986d480e5424edd.  Force=False
-2022-08-26 14:11:33,549 - distributed.scheduler - INFO - Client Client-a79eb3bf-2583-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:11:33,549 - distributed.scheduler - INFO - Scheduler cancels key finalize-80253a2a4cc81a804986d480e5424edd.  Force=False
-2022-08-26 14:11:34,409 - distributed.scheduler - INFO - Client Client-a79eb3bf-2583-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:11:34,409 - distributed.scheduler - INFO - Scheduler cancels key finalize-80253a2a4cc81a804986d480e5424edd.  Force=False
-2022-08-26 14:11:34,451 - distributed.scheduler - INFO - Remove client Client-a79eb3bf-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:34,461 - distributed.scheduler - INFO - Remove client Client-a79eb3bf-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:34,463 - distributed.scheduler - INFO - Close client connection: Client-a79eb3bf-2583-11ed-a99d-00d861bc4509
-PASSED
-distributed/tests/test_stress.py::test_stress_creation_and_deletion SKIPPED
-distributed/tests/test_stress.py::test_stress_scatter_death 2022-08-26 14:11:35,783 - distributed.utils_perf - WARNING - full garbage collections took 42% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_stress.py::test_stress_communication SKIPPED
-distributed/tests/test_stress.py::test_stress_steal SKIPPED (uncondi...)
-distributed/tests/test_stress.py::test_close_connections SKIPPED (ne...)
-distributed/tests/test_stress.py::test_no_delay_during_large_transfer SKIPPED
-distributed/tests/test_stress.py::test_chaos_rechunk SKIPPED (need -...)
-distributed/tests/test_system.py::test_memory_limit PASSED
-distributed/tests/test_system.py::test_memory_limit_cgroups PASSED
-distributed/tests/test_system.py::test_rlimit PASSED
-distributed/tests/test_system_monitor.py::test_SystemMonitor PASSED
-distributed/tests/test_system_monitor.py::test_count PASSED
-distributed/tests/test_system_monitor.py::test_range_query PASSED
-distributed/tests/test_system_monitor.py::test_disk_config PASSED
-distributed/tests/test_threadpoolexecutor.py::test_tpe PASSED
-distributed/tests/test_threadpoolexecutor.py::test_shutdown_timeout PASSED
-distributed/tests/test_threadpoolexecutor.py::test_shutdown_timeout_raises PASSED
-distributed/tests/test_threadpoolexecutor.py::test_shutdown_wait PASSED
-distributed/tests/test_threadpoolexecutor.py::test_secede_rejoin_busy PASSED
-distributed/tests/test_threadpoolexecutor.py::test_secede_rejoin_quiet PASSED
-distributed/tests/test_threadpoolexecutor.py::test_rejoin_idempotent PASSED
-distributed/tests/test_threadpoolexecutor.py::test_thread_name PASSED
-distributed/tests/test_tls_functional.py::test_basic 2022-08-26 14:11:38,083 - distributed.utils_perf - WARNING - full garbage collections took 42% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_Queue 2022-08-26 14:11:38,362 - distributed.utils_perf - WARNING - full garbage collections took 43% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_client_submit 2022-08-26 14:11:38,760 - distributed.utils_perf - WARNING - full garbage collections took 43% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_gather 2022-08-26 14:11:39,062 - distributed.utils_perf - WARNING - full garbage collections took 44% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_scatter 2022-08-26 14:11:39,340 - distributed.utils_perf - WARNING - full garbage collections took 44% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_nanny 2022-08-26 14:11:40,090 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:41027
-2022-08-26 14:11:40,090 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:41027
-2022-08-26 14:11:40,090 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:11:40,090 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37583
-2022-08-26 14:11:40,090 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:33723
-2022-08-26 14:11:40,090 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:40,090 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:11:40,090 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:40,090 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ksy_kev4
-2022-08-26 14:11:40,091 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:40,097 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:45639
-2022-08-26 14:11:40,098 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:45639
-2022-08-26 14:11:40,098 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:11:40,098 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36459
-2022-08-26 14:11:40,098 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:33723
-2022-08-26 14:11:40,098 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:40,098 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:40,098 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:40,098 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ky5afpif
-2022-08-26 14:11:40,098 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:40,385 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:33723
-2022-08-26 14:11:40,385 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:40,386 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:40,395 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:33723
-2022-08-26 14:11:40,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:40,396 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:40,626 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:45639
-2022-08-26 14:11:40,627 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:41027
-2022-08-26 14:11:40,627 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7f1532da-9ac0-4f8d-ad06-5bbb2fa617af Address tls://127.0.0.1:45639 Status: Status.closing
-2022-08-26 14:11:40,627 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-798ff9ac-d960-4972-8099-ee97493986c0 Address tls://127.0.0.1:41027 Status: Status.closing
-2022-08-26 14:11:41,029 - distributed.utils_perf - WARNING - full garbage collections took 44% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_rebalance 2022-08-26 14:11:41,344 - distributed.utils_perf - WARNING - full garbage collections took 44% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_work_stealing 2022-08-26 14:11:42,973 - distributed.utils_perf - WARNING - full garbage collections took 44% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_worker_client 2022-08-26 14:11:43,296 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_worker_client_gather 2022-08-26 14:11:43,608 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_worker_client_executor 2022-08-26 14:11:43,984 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_retire_workers 2022-08-26 14:11:44,740 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:33839
-2022-08-26 14:11:44,740 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:33839
-2022-08-26 14:11:44,740 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:11:44,740 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46695
-2022-08-26 14:11:44,740 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:35367
-2022-08-26 14:11:44,740 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:44,740 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:11:44,740 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:44,740 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qnslid73
-2022-08-26 14:11:44,740 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:44,744 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:44301
-2022-08-26 14:11:44,744 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:44301
-2022-08-26 14:11:44,744 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:11:44,744 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42581
-2022-08-26 14:11:44,744 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:35367
-2022-08-26 14:11:44,744 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:44,744 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:44,744 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:44,744 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9fqb3x17
-2022-08-26 14:11:44,744 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:45,036 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:35367
-2022-08-26 14:11:45,036 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:45,036 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:45,049 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:35367
-2022-08-26 14:11:45,049 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:45,050 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:45,109 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:44301
-2022-08-26 14:11:45,114 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4f16c517-410f-4c98-9822-6a28f4213dc2 Address tls://127.0.0.1:44301 Status: Status.closing
-2022-08-26 14:11:45,115 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:11:45,115 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:11:45,251 - distributed.worker - INFO - Stopping worker at tls://127.0.0.1:33839
-2022-08-26 14:11:45,252 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3dd9f2d1-7d18-460b-a4f7-419a99a802b5 Address tls://127.0.0.1:33839 Status: Status.closing
-2022-08-26 14:11:45,597 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_tls_functional.py::test_security_dict_input_no_security PASSED
-distributed/tests/test_tls_functional.py::test_security_dict_input PASSED
-distributed/tests/test_utils.py::test_All PASSED
-distributed/tests/test_utils.py::test_sync_error PASSED
-distributed/tests/test_utils.py::test_sync_timeout PASSED
-distributed/tests/test_utils.py::test_sync_closed_loop PASSED
-distributed/tests/test_utils.py::test_is_kernel PASSED
-distributed/tests/test_utils.py::test_ensure_ip PASSED
-distributed/tests/test_utils.py::test_get_ip_interface PASSED
-distributed/tests/test_utils.py::test_get_mp_context PASSED
-distributed/tests/test_utils.py::test_truncate_exception PASSED
-distributed/tests/test_utils.py::test_get_traceback PASSED
-distributed/tests/test_utils.py::test_maybe_complex PASSED
-distributed/tests/test_utils.py::test_read_block PASSED
-distributed/tests/test_utils.py::test_seek_delimiter_endline PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data1] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[1] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data3] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data4] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data5] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data6] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data7] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data8] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data9] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data10] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data11] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data12] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data13] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview[data14] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview_ndarray[i8-12-shape0-strides0] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview_ndarray[i8-12-shape1-strides1] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview_ndarray[i8-12-shape2-strides2] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview_ndarray[i8-12-shape3-strides3] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview_ndarray[i8-12-shape4-strides4] PASSED
-distributed/tests/test_utils.py::test_ensure_memoryview_pyarrow_buffer PASSED
-distributed/tests/test_utils.py::test_nbytes PASSED
-distributed/tests/test_utils.py::test_open_port PASSED
-distributed/tests/test_utils.py::test_set_thread_state PASSED
-distributed/tests/test_utils.py::test_loop_runner FAILED
-distributed/tests/test_utils.py::test_two_loop_runners FAILED
-distributed/tests/test_utils.py::test_loop_runner_gen PASSED
-distributed/tests/test_utils.py::test_all_quiet_exceptions PASSED
-distributed/tests/test_utils.py::test_warn_on_duration PASSED
-distributed/tests/test_utils.py::test_logs PASSED
-distributed/tests/test_utils.py::test_is_valid_xml PASSED
-distributed/tests/test_utils.py::test_format_dashboard_link PASSED
-distributed/tests/test_utils.py::test_parse_ports PASSED
-distributed/tests/test_utils.py::test_offload PASSED
-distributed/tests/test_utils.py::test_offload_preserves_contextvars PASSED
-distributed/tests/test_utils.py::test_serialize_for_cli_deprecated PASSED
-distributed/tests/test_utils.py::test_deserialize_for_cli_deprecated PASSED
-distributed/tests/test_utils.py::test_parse_bytes_deprecated PASSED
-distributed/tests/test_utils.py::test_format_bytes_deprecated PASSED
-distributed/tests/test_utils.py::test_format_time_deprecated PASSED
-distributed/tests/test_utils.py::test_funcname_deprecated PASSED
-distributed/tests/test_utils.py::test_parse_timedelta_deprecated PASSED
-distributed/tests/test_utils.py::test_typename_deprecated PASSED
-distributed/tests/test_utils.py::test_tmpfile_deprecated PASSED
-distributed/tests/test_utils.py::test_iscoroutinefunction_unhashable_input PASSED
-distributed/tests/test_utils.py::test_iscoroutinefunction_nested_partial PASSED
-distributed/tests/test_utils.py::test_recursive_to_dict PASSED
-distributed/tests/test_utils.py::test_recursive_to_dict_no_nest PASSED
-distributed/tests/test_utils.py::test_log_errors 2022-08-26 14:11:46,273 - distributed.utils - ERROR - err7
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_utils.py", line 983, in test_log_errors
-    raise CustomError("err7")
-test_utils.test_log_errors.<locals>.CustomError: err7
-PASSED
-distributed/tests/test_utils_comm.py::test_pack_data PASSED
-distributed/tests/test_utils_comm.py::test_subs_multiple PASSED
-distributed/tests/test_utils_comm.py::test_gather_from_workers_permissive 2022-08-26 14:11:46,311 - distributed.comm.tcp - WARNING - Closing dangling stream in <TCP  local=tcp://127.0.0.1:60252 remote=tcp://127.0.0.1:39347>
-2022-08-26 14:11:46,311 - distributed.comm.tcp - WARNING - Closing dangling stream in <TCP  local=tcp://127.0.0.1:45028 remote=tcp://127.0.0.1:40537>
-2022-08-26 14:11:46,544 - distributed.utils_perf - WARNING - full garbage collections took 45% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_utils_comm.py::test_gather_from_workers_permissive_flaky 2022-08-26 14:11:46,812 - distributed.utils_perf - WARNING - full garbage collections took 46% CPU time recently (threshold: 10%)
-PASSED
-distributed/tests/test_utils_comm.py::test_retry_no_exception PASSED
-distributed/tests/test_utils_comm.py::test_retry0_raises_immediately PASSED
-distributed/tests/test_utils_comm.py::test_retry_does_retry_and_sleep PASSED
-distributed/tests/test_utils_perf.py::test_fractional_timer PASSED
-distributed/tests/test_utils_perf.py::test_gc_diagnosis_cpu_time SKIPPED
-distributed/tests/test_utils_perf.py::test_gc_diagnosis_rss_win XFAIL
-distributed/tests/test_utils_test.py::test_bare_cluster 2022-08-26 14:11:48,936 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:11:48,940 - distributed.scheduler - INFO - State start
-2022-08-26 14:11:48,945 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:11:48,945 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44993
-2022-08-26 14:11:48,945 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:11:48,959 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34971
-2022-08-26 14:11:48,959 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34971
-2022-08-26 14:11:48,959 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34237
-2022-08-26 14:11:48,959 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42823
-2022-08-26 14:11:48,959 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:48,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:48,959 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:48,959 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42823
-2022-08-26 14:11:48,959 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:48,959 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33411
-2022-08-26 14:11:48,959 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-l4d26tu4
-2022-08-26 14:11:48,959 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36047
-2022-08-26 14:11:48,959 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:48,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:48,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:48,959 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36047
-2022-08-26 14:11:48,959 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:48,959 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:48,959 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37325
-2022-08-26 14:11:48,959 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38139
-2022-08-26 14:11:48,959 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kt925d95
-2022-08-26 14:11:48,959 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37325
-2022-08-26 14:11:48,959 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38513
-2022-08-26 14:11:48,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:48,959 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:48,960 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:48,960 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:48,959 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:48,960 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:48,960 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-n_cunuz0
-2022-08-26 14:11:48,960 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:48,960 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:48,960 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:48,960 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:48,960 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q3ylfgu8
-2022-08-26 14:11:48,960 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,016 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39419
-2022-08-26 14:11:49,016 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39419
-2022-08-26 14:11:49,016 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35295
-2022-08-26 14:11:49,016 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,017 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,017 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:49,017 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:49,017 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6k9xvb3j
-2022-08-26 14:11:49,017 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,071 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44203
-2022-08-26 14:11:49,071 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44203
-2022-08-26 14:11:49,071 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44997
-2022-08-26 14:11:49,071 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,071 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,071 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:49,071 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:49,071 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jt7nc7pt
-2022-08-26 14:11:49,071 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,215 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36083
-2022-08-26 14:11:49,215 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36083
-2022-08-26 14:11:49,215 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45129
-2022-08-26 14:11:49,215 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,215 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:49,215 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:49,215 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lpany5fh
-2022-08-26 14:11:49,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,242 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38057
-2022-08-26 14:11:49,242 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38057
-2022-08-26 14:11:49,243 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32927
-2022-08-26 14:11:49,243 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,243 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,243 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:49,243 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:49,243 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-w8mq9kxz
-2022-08-26 14:11:49,243 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,338 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33095
-2022-08-26 14:11:49,338 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33095
-2022-08-26 14:11:49,338 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42397
-2022-08-26 14:11:49,338 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,338 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,338 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:49,338 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:49,338 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-u_748nu8
-2022-08-26 14:11:49,338 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,338 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33055
-2022-08-26 14:11:49,339 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33055
-2022-08-26 14:11:49,339 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39481
-2022-08-26 14:11:49,339 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,339 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,339 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:49,339 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:49,339 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kiophbfc
-2022-08-26 14:11:49,340 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,380 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42823', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:49,687 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42823
-2022-08-26 14:11:49,688 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,688 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,688 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,689 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37325', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:49,689 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,689 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37325
-2022-08-26 14:11:49,689 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,690 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36047', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:49,689 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,690 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,690 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36047
-2022-08-26 14:11:49,690 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,690 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,691 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,690 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34971', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:49,691 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,691 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34971
-2022-08-26 14:11:49,691 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,691 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,691 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39419', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:49,691 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,691 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,692 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39419
-2022-08-26 14:11:49,692 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,692 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44203', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:49,692 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,692 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,692 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,692 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44203
-2022-08-26 14:11:49,692 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,693 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36083', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:49,693 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,693 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,693 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,693 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36083
-2022-08-26 14:11:49,693 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,693 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38057', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:49,694 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,693 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,694 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,694 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38057
-2022-08-26 14:11:49,694 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,694 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33055', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:49,694 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,694 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,694 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,695 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33055
-2022-08-26 14:11:49,695 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,695 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33095', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:49,695 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,695 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,695 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,695 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33095
-2022-08-26 14:11:49,695 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,695 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44993
-2022-08-26 14:11:49,696 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:49,696 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:49,696 - distributed.core - INFO - Starting established connection
-PASSED
-distributed/tests/test_utils_test.py::test_cluster 2022-08-26 14:11:50,669 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:11:50,672 - distributed.scheduler - INFO - State start
-2022-08-26 14:11:50,675 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:11:50,675 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36071
-2022-08-26 14:11:50,675 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:11:50,685 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-q3ylfgu8', purging
-2022-08-26 14:11:50,686 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-l4d26tu4', purging
-2022-08-26 14:11:50,686 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-6k9xvb3j', purging
-2022-08-26 14:11:50,686 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-w8mq9kxz', purging
-2022-08-26 14:11:50,686 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-n_cunuz0', purging
-2022-08-26 14:11:50,686 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-kt925d95', purging
-2022-08-26 14:11:50,686 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-lpany5fh', purging
-2022-08-26 14:11:50,686 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-kiophbfc', purging
-2022-08-26 14:11:50,687 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-u_748nu8', purging
-2022-08-26 14:11:50,687 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-jt7nc7pt', purging
-2022-08-26 14:11:50,693 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42121
-2022-08-26 14:11:50,693 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42121
-2022-08-26 14:11:50,693 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34245
-2022-08-26 14:11:50,693 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36071
-2022-08-26 14:11:50,693 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:50,693 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:50,693 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:50,693 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jiojh942
-2022-08-26 14:11:50,693 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:50,730 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38215
-2022-08-26 14:11:50,730 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38215
-2022-08-26 14:11:50,730 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34391
-2022-08-26 14:11:50,730 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36071
-2022-08-26 14:11:50,730 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:50,730 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:50,730 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:50,730 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y2ddkoi7
-2022-08-26 14:11:50,730 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:50,998 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42121', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:51,272 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42121
-2022-08-26 14:11:51,272 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:51,272 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36071
-2022-08-26 14:11:51,273 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:51,273 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38215', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:51,273 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38215
-2022-08-26 14:11:51,273 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:51,273 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:51,274 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36071
-2022-08-26 14:11:51,274 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:51,274 - distributed.core - INFO - Starting established connection
-PASSED
-distributed/tests/test_utils_test.py::test_gen_cluster PASSED
-distributed/tests/test_utils_test.py::test_gen_cluster_pytest_fixture PASSED
-distributed/tests/test_utils_test.py::test_gen_cluster_parametrized[True] PASSED
-distributed/tests/test_utils_test.py::test_gen_cluster_multi_parametrized[a-True] PASSED
-distributed/tests/test_utils_test.py::test_gen_cluster_multi_parametrized[b-True] PASSED
-distributed/tests/test_utils_test.py::test_gen_cluster_parametrized_variadic_workers[True] PASSED
-distributed/tests/test_utils_test.py::test_gen_cluster_set_config_nanny 2022-08-26 14:11:53,658 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45935
-2022-08-26 14:11:53,658 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45935
-2022-08-26 14:11:53,658 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:11:53,658 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34601
-2022-08-26 14:11:53,658 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37613
-2022-08-26 14:11:53,658 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43985
-2022-08-26 14:11:53,658 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:53,658 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37613
-2022-08-26 14:11:53,658 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:53,658 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:11:53,658 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:53,658 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44809
-2022-08-26 14:11:53,658 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2jtqlcv6
-2022-08-26 14:11:53,658 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43985
-2022-08-26 14:11:53,658 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:53,658 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:53,658 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:11:53,658 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:53,658 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-14pcxty5
-2022-08-26 14:11:53,658 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:53,964 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43985
-2022-08-26 14:11:53,964 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:53,965 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:53,965 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43985
-2022-08-26 14:11:53,965 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:53,965 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:54,066 - distributed.worker - INFO - Run out-of-band function 'assert_config'
-2022-08-26 14:11:54,067 - distributed.worker - INFO - Run out-of-band function 'assert_config'
-2022-08-26 14:11:54,071 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45935
-2022-08-26 14:11:54,072 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37613
-2022-08-26 14:11:54,072 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2832c850-dcac-4931-b8ce-fcaacb749903 Address tcp://127.0.0.1:45935 Status: Status.closing
-2022-08-26 14:11:54,072 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4660e808-2e8f-4dae-a675-53b644ddd520 Address tcp://127.0.0.1:37613 Status: Status.closing
-PASSED
-distributed/tests/test_utils_test.py::test_gen_cluster_cleans_up_client SKIPPED
-distributed/tests/test_utils_test.py::test_gen_cluster_without_client PASSED
-distributed/tests/test_utils_test.py::test_gen_cluster_tls PASSED
-distributed/tests/test_utils_test.py::test_gen_test XFAIL (Test shou...)
-distributed/tests/test_utils_test.py::test_gen_test_legacy_implicit XFAIL
-distributed/tests/test_utils_test.py::test_gen_test_legacy_explicit XFAIL
-distributed/tests/test_utils_test.py::test_gen_test_parametrized[True] PASSED
-distributed/tests/test_utils_test.py::test_gen_test_double_parametrized[False-True] PASSED
-distributed/tests/test_utils_test.py::test_gen_test_pytest_fixture PASSED
-distributed/tests/test_utils_test.py::test_new_config PASSED
-distributed/tests/test_utils_test.py::test_lingering_client 2022-08-26 14:11:55,251 - distributed.scheduler - INFO - State start
-2022-08-26 14:11:55,252 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:11:55,253 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40155
-2022-08-26 14:11:55,253 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40485
-2022-08-26 14:11:55,257 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44837
-2022-08-26 14:11:55,257 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44837
-2022-08-26 14:11:55,257 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:11:55,257 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36513
-2022-08-26 14:11:55,258 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40155
-2022-08-26 14:11:55,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:55,258 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:55,258 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:55,258 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ierrf6rk
-2022-08-26 14:11:55,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:55,258 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33437
-2022-08-26 14:11:55,258 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33437
-2022-08-26 14:11:55,258 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:11:55,258 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42695
-2022-08-26 14:11:55,259 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40155
-2022-08-26 14:11:55,259 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:55,259 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:11:55,259 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:55,259 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1kf2wwds
-2022-08-26 14:11:55,259 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:55,262 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44837', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:11:55,262 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44837
-2022-08-26 14:11:55,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:55,262 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33437', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:11:55,263 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33437
-2022-08-26 14:11:55,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:55,263 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40155
-2022-08-26 14:11:55,263 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:55,263 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40155
-2022-08-26 14:11:55,263 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:55,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:55,264 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:55,277 - distributed.scheduler - INFO - Receive client connection: Client-b6989180-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:55,277 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:55,278 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44837
-2022-08-26 14:11:55,278 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33437
-2022-08-26 14:11:55,279 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44837', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:11:55,280 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44837
-2022-08-26 14:11:55,280 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33437', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:11:55,280 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33437
-2022-08-26 14:11:55,280 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:11:55,280 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-29261e65-c609-467f-baba-4f7cd77fbcfd Address tcp://127.0.0.1:44837 Status: Status.closing
-2022-08-26 14:11:55,280 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7eae1588-1f2c-4cb9-a4f8-272323ebb278 Address tcp://127.0.0.1:33437 Status: Status.closing
-2022-08-26 14:11:55,281 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:11:55,281 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:11:55,384 - distributed.client - ERROR - 
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x56404471bf00>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/tests/test_utils_test.py::test_lingering_client_2 2022-08-26 14:11:56,561 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:11:56,563 - distributed.scheduler - INFO - State start
-2022-08-26 14:11:56,566 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:11:56,567 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46591
-2022-08-26 14:11:56,567 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:11:56,592 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33903
-2022-08-26 14:11:56,592 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33903
-2022-08-26 14:11:56,592 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40165
-2022-08-26 14:11:56,592 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46591
-2022-08-26 14:11:56,592 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:56,592 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:56,592 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:56,592 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5x7fths4
-2022-08-26 14:11:56,592 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:56,623 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45169
-2022-08-26 14:11:56,623 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45169
-2022-08-26 14:11:56,623 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33037
-2022-08-26 14:11:56,623 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46591
-2022-08-26 14:11:56,623 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:56,623 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:56,623 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:56,623 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4d6ynnp1
-2022-08-26 14:11:56,624 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:56,893 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33903', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:57,172 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33903
-2022-08-26 14:11:57,172 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:57,172 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46591
-2022-08-26 14:11:57,172 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:57,173 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45169', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:57,173 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45169
-2022-08-26 14:11:57,173 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:57,173 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:57,173 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46591
-2022-08-26 14:11:57,174 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:57,174 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:57,180 - distributed.scheduler - INFO - Receive client connection: Client-b7bac08d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:57,180 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:57,291 - distributed.client - ERROR - 
-ConnectionRefusedError: [Errno 111] Connection refused
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 291, in connect
-    comm = await asyncio.wait_for(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-    return fut.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 496, in connect
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x7f153008ae20>: ConnectionRefusedError: [Errno 111] Connection refused
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1246, in _reconnect
-    await self._ensure_connected(timeout=timeout)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 1276, in _ensure_connected
-    comm = await connect(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/core.py", line 315, in connect
-    await asyncio.sleep(backoff)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 605, in sleep
-    return await future
-asyncio.exceptions.CancelledError
-PASSED
-distributed/tests/test_utils_test.py::test_tls_cluster 2022-08-26 14:11:58,248 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:11:58,251 - distributed.scheduler - INFO - State start
-2022-08-26 14:11:58,254 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:11:58,254 - distributed.scheduler - INFO -   Scheduler at:     tls://127.0.0.1:45429
-2022-08-26 14:11:58,254 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:11:58,264 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-5x7fths4', purging
-2022-08-26 14:11:58,264 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-4d6ynnp1', purging
-2022-08-26 14:11:58,271 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:37401
-2022-08-26 14:11:58,271 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:37401
-2022-08-26 14:11:58,271 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46341
-2022-08-26 14:11:58,271 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:45429
-2022-08-26 14:11:58,271 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:58,271 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:58,271 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:58,271 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vo2_vujt
-2022-08-26 14:11:58,271 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:58,313 - distributed.worker - INFO -       Start worker at:      tls://127.0.0.1:41883
-2022-08-26 14:11:58,313 - distributed.worker - INFO -          Listening to:      tls://127.0.0.1:41883
-2022-08-26 14:11:58,313 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45585
-2022-08-26 14:11:58,313 - distributed.worker - INFO - Waiting to connect to:      tls://127.0.0.1:45429
-2022-08-26 14:11:58,313 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:58,313 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:58,313 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:58,313 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lryp4_iq
-2022-08-26 14:11:58,313 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:58,578 - distributed.scheduler - INFO - Register worker <WorkerState 'tls://127.0.0.1:37401', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:58,855 - distributed.scheduler - INFO - Starting worker compute stream, tls://127.0.0.1:37401
-2022-08-26 14:11:58,855 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:58,855 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:45429
-2022-08-26 14:11:58,855 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:58,856 - distributed.scheduler - INFO - Register worker <WorkerState 'tls://127.0.0.1:41883', status: init, memory: 0, processing: 0>
-2022-08-26 14:11:58,856 - distributed.scheduler - INFO - Starting worker compute stream, tls://127.0.0.1:41883
-2022-08-26 14:11:58,856 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:58,856 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:58,856 - distributed.worker - INFO -         Registered to:      tls://127.0.0.1:45429
-2022-08-26 14:11:58,856 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:58,857 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:58,869 - distributed.scheduler - INFO - Receive client connection: Client-b8bbf827-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:58,869 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:11:58,893 - distributed.scheduler - INFO - Remove client Client-b8bbf827-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:58,893 - distributed.scheduler - INFO - Remove client Client-b8bbf827-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:58,893 - distributed.scheduler - INFO - Close client connection: Client-b8bbf827-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_utils_test.py::test_tls_scheduler 2022-08-26 14:11:58,927 - distributed.scheduler - INFO - State start
-2022-08-26 14:11:58,929 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:11:58,929 - distributed.scheduler - INFO -   Scheduler at:     tls://127.0.0.1:40361
-2022-08-26 14:11:58,929 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40875
-2022-08-26 14:11:58,930 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:11:58,930 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_utils_test.py::test__UnhashableCallable PASSED
-distributed/tests/test_utils_test.py::test_locked_comm_drop_in_replacement 2022-08-26 14:11:58,943 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:43979
-PASSED
-distributed/tests/test_utils_test.py::test_locked_comm_intercept_read PASSED
-distributed/tests/test_utils_test.py::test_locked_comm_intercept_write PASSED
-distributed/tests/test_utils_test.py::test_dump_cluster_state_timeout SKIPPED
-distributed/tests/test_utils_test.py::test_assert_story PASSED
-distributed/tests/test_utils_test.py::test_assert_story_malformed_story[Missing payload, stimulus_id, ts] PASSED
-distributed/tests/test_utils_test.py::test_assert_story_malformed_story[Missing (stimulus_id, ts)] PASSED
-distributed/tests/test_utils_test.py::test_assert_story_malformed_story[Missing ts] PASSED
-distributed/tests/test_utils_test.py::test_assert_story_malformed_story[ts is not a float] PASSED
-distributed/tests/test_utils_test.py::test_assert_story_malformed_story[ts is in the future] PASSED
-distributed/tests/test_utils_test.py::test_assert_story_malformed_story[ts is too old] PASSED
-distributed/tests/test_utils_test.py::test_assert_story_malformed_story[stimulus_id is not a str] PASSED
-distributed/tests/test_utils_test.py::test_assert_story_malformed_story[stimulus_id is an empty str] PASSED
-distributed/tests/test_utils_test.py::test_assert_story_malformed_story[no payload] PASSED
-distributed/tests/test_utils_test.py::test_assert_story_malformed_story[timestamps out of order] PASSED
-distributed/tests/test_utils_test.py::test_assert_story_identity[True] 2022-08-26 14:11:59,007 - distributed.scheduler - INFO - State start
-2022-08-26 14:11:59,010 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:11:59,010 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38647
-2022-08-26 14:11:59,010 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44145
-2022-08-26 14:11:59,011 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-lryp4_iq', purging
-2022-08-26 14:11:59,011 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-vo2_vujt', purging
-2022-08-26 14:11:59,013 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35869
-2022-08-26 14:11:59,013 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35869
-2022-08-26 14:11:59,013 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:11:59,013 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39439
-2022-08-26 14:11:59,014 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38647
-2022-08-26 14:11:59,014 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,014 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:59,014 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:59,014 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mhafrgyq
-2022-08-26 14:11:59,014 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,015 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35869', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:11:59,016 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35869
-2022-08-26 14:11:59,016 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:59,016 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38647
-2022-08-26 14:11:59,016 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,016 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:59,030 - distributed.scheduler - INFO - Receive client connection: Client-b8d5253f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:59,030 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:59,061 - distributed.scheduler - INFO - Remove client Client-b8d5253f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:59,062 - distributed.scheduler - INFO - Remove client Client-b8d5253f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:59,062 - distributed.scheduler - INFO - Close client connection: Client-b8d5253f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:59,062 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35869
-2022-08-26 14:11:59,063 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35869', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:11:59,063 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35869
-2022-08-26 14:11:59,063 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:11:59,063 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d35ea549-23a3-485b-876f-ed8935f6f0ec Address tcp://127.0.0.1:35869 Status: Status.closing
-2022-08-26 14:11:59,064 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:11:59,064 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_utils_test.py::test_assert_story_identity[False] 2022-08-26 14:11:59,292 - distributed.scheduler - INFO - State start
-2022-08-26 14:11:59,294 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:11:59,294 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44949
-2022-08-26 14:11:59,294 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36353
-2022-08-26 14:11:59,297 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41037
-2022-08-26 14:11:59,297 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41037
-2022-08-26 14:11:59,297 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:11:59,297 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44539
-2022-08-26 14:11:59,297 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44949
-2022-08-26 14:11:59,297 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,297 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:59,297 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:59,297 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yn7u7sfv
-2022-08-26 14:11:59,297 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,299 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41037', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:11:59,300 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41037
-2022-08-26 14:11:59,300 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:59,300 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44949
-2022-08-26 14:11:59,300 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,300 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:59,313 - distributed.scheduler - INFO - Receive client connection: Client-b90071c1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:59,314 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:59,345 - distributed.scheduler - INFO - Remove client Client-b90071c1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:59,346 - distributed.scheduler - INFO - Remove client Client-b90071c1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:59,346 - distributed.scheduler - INFO - Close client connection: Client-b90071c1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:11:59,346 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41037
-2022-08-26 14:11:59,347 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41037', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:11:59,347 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41037
-2022-08-26 14:11:59,347 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:11:59,347 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ae0b7edf-439c-4614-b01d-550efc801284 Address tcp://127.0.0.1:41037 Status: Status.closing
-2022-08-26 14:11:59,348 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:11:59,348 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_utils_test.py::test_dump_cluster_state 2022-08-26 14:11:59,576 - distributed.scheduler - INFO - State start
-2022-08-26 14:11:59,578 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:11:59,578 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:32981
-2022-08-26 14:11:59,578 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34745
-2022-08-26 14:11:59,583 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36681
-2022-08-26 14:11:59,583 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36681
-2022-08-26 14:11:59,583 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:11:59,583 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33927
-2022-08-26 14:11:59,583 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32981
-2022-08-26 14:11:59,583 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,583 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:11:59,583 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:59,583 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-imlzo9vj
-2022-08-26 14:11:59,583 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,584 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44183
-2022-08-26 14:11:59,584 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44183
-2022-08-26 14:11:59,584 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:11:59,584 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33383
-2022-08-26 14:11:59,584 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32981
-2022-08-26 14:11:59,584 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,584 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:11:59,584 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:11:59,584 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vaystco_
-2022-08-26 14:11:59,584 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,587 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36681', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:11:59,587 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36681
-2022-08-26 14:11:59,587 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:59,588 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44183', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:11:59,588 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44183
-2022-08-26 14:11:59,588 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:59,588 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32981
-2022-08-26 14:11:59,588 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,589 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32981
-2022-08-26 14:11:59,589 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:11:59,589 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:59,589 - distributed.core - INFO - Starting established connection
-2022-08-26 14:11:59,761 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36681
-2022-08-26 14:11:59,762 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44183
-2022-08-26 14:11:59,763 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36681', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:11:59,763 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36681
-2022-08-26 14:11:59,763 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44183', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:11:59,763 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44183
-2022-08-26 14:11:59,763 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:11:59,763 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-adbb636c-914d-4d56-9126-fe2be7139518 Address tcp://127.0.0.1:36681 Status: Status.closing
-2022-08-26 14:11:59,763 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-db0f0b1b-fad7-4fb3-a9b1-97ab55ef3e4c Address tcp://127.0.0.1:44183 Status: Status.closing
-2022-08-26 14:11:59,764 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:11:59,764 - distributed.scheduler - INFO - Scheduler closing all comms
-Dumped cluster state to /tmp/pytest-of-matthew/pytest-12/test_dump_cluster_state0/dump.yaml
-PASSED
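
test_dump_cluster_state exercises the cluster-dump helper that writes the dump.yaml noted above; the same information is reachable from user code. A rough sketch, assuming Client.dump_cluster_state() takes a filename stem plus a format argument and appends the extension, as the logged ".../dump.yaml" path suggests (treat the exact parameters as an assumption):

    # Sketch: write a cluster state dump comparable to the dump.yaml above.
    # The exact Client.dump_cluster_state() signature is assumed, not verified here.
    import yaml  # PyYAML, used to read the yaml-format dump back in
    from distributed import Client, LocalCluster

    with LocalCluster(n_workers=2, processes=False) as cluster, Client(cluster) as client:
        client.submit(sum, [1, 2, 3]).result()
        client.dump_cluster_state("dump", format="yaml")  # expected to write dump.yaml

    with open("dump.yaml") as f:
        state = yaml.safe_load(f)
    print(sorted(state))  # top-level sections (scheduler/worker state), version-dependent
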
-distributed/tests/test_utils_test.py::test_dump_cluster_state_no_workers 2022-08-26 14:11:59,991 - distributed.scheduler - INFO - State start
-2022-08-26 14:11:59,993 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:11:59,993 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43807
-2022-08-26 14:11:59,993 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34337
-2022-08-26 14:12:00,017 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:00,017 - distributed.scheduler - INFO - Scheduler closing all comms
-Dumped cluster state to /tmp/pytest-of-matthew/pytest-12/test_dump_cluster_state_no_wor0/dump.yaml
-PASSED
-distributed/tests/test_utils_test.py::test_dump_cluster_state_nannies 2022-08-26 14:12:00,243 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:00,245 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:00,245 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36959
-2022-08-26 14:12:00,245 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46735
-2022-08-26 14:12:00,250 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:34467'
-2022-08-26 14:12:00,251 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:38723'
-2022-08-26 14:12:00,985 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45211
-2022-08-26 14:12:00,986 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45211
-2022-08-26 14:12:00,986 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:00,986 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43755
-2022-08-26 14:12:00,986 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36959
-2022-08-26 14:12:00,986 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:00,986 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:00,986 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:00,986 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-klvoli_j
-2022-08-26 14:12:00,986 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:01,001 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39805
-2022-08-26 14:12:01,001 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39805
-2022-08-26 14:12:01,001 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:01,001 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44857
-2022-08-26 14:12:01,001 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36959
-2022-08-26 14:12:01,001 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:01,001 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:01,001 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:01,001 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ok76o0o6
-2022-08-26 14:12:01,001 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:01,287 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45211', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:01,288 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45211
-2022-08-26 14:12:01,288 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:01,288 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36959
-2022-08-26 14:12:01,288 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:01,288 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:01,289 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39805', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:01,290 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39805
-2022-08-26 14:12:01,290 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:01,290 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36959
-2022-08-26 14:12:01,290 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:01,290 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:01,491 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:34467'.
-2022-08-26 14:12:01,491 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:12:01,492 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:38723'.
-2022-08-26 14:12:01,492 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:12:01,492 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45211
-2022-08-26 14:12:01,493 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39805
-2022-08-26 14:12:01,493 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-89dde608-5712-404f-a888-494fdf5ea931 Address tcp://127.0.0.1:45211 Status: Status.closing
-2022-08-26 14:12:01,493 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45211', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:01,493 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45211
-2022-08-26 14:12:01,494 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3cfa5541-81df-4d25-b687-17d7b2d055b4 Address tcp://127.0.0.1:39805 Status: Status.closing
-2022-08-26 14:12:01,494 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39805', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:01,494 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39805
-2022-08-26 14:12:01,494 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:01,621 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:01,621 - distributed.scheduler - INFO - Scheduler closing all comms
-Dumped cluster state to /tmp/pytest-of-matthew/pytest-12/test_dump_cluster_state_nannie0/dump.yaml
-PASSED
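
The nannies variant starts each worker in its own process supervised by a Nanny ("Start Nanny at: ...", later "Nanny asking worker to close"), which is also what a process-backed LocalCluster does. A small sketch under that assumption:

    # Sketch: a process-backed local cluster where each worker sits behind a Nanny,
    # as in the test_dump_cluster_state_nannies log above.
    from distributed import Client, LocalCluster

    if __name__ == "__main__":  # guard needed because workers are spawned processes
        with LocalCluster(n_workers=2, threads_per_worker=1, processes=True) as cluster:
            with Client(cluster) as client:
                print(list(client.scheduler_info()["workers"]))  # worker addresses
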
-distributed/tests/test_utils_test.py::test_dump_cluster_state_unresponsive_local_worker 2022-08-26 14:12:01,848 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:01,850 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:01,850 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39169
-2022-08-26 14:12:01,850 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45243
-2022-08-26 14:12:01,855 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46857
-2022-08-26 14:12:01,855 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46857
-2022-08-26 14:12:01,855 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:01,855 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40461
-2022-08-26 14:12:01,855 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39169
-2022-08-26 14:12:01,855 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:01,855 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:01,855 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:01,855 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lg3dj_xc
-2022-08-26 14:12:01,855 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:01,856 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34319
-2022-08-26 14:12:01,856 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34319
-2022-08-26 14:12:01,856 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:01,856 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42367
-2022-08-26 14:12:01,856 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39169
-2022-08-26 14:12:01,856 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:01,856 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:01,856 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:01,856 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jt70wntr
-2022-08-26 14:12:01,856 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:01,859 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46857', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:01,859 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46857
-2022-08-26 14:12:01,859 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:01,860 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34319', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:01,860 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34319
-2022-08-26 14:12:01,860 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:01,860 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39169
-2022-08-26 14:12:01,860 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:01,861 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39169
-2022-08-26 14:12:01,861 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:01,861 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:01,861 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:02,030 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46857
-2022-08-26 14:12:02,031 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34319
-2022-08-26 14:12:02,032 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46857', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:02,032 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46857
-2022-08-26 14:12:02,032 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34319', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:02,032 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34319
-2022-08-26 14:12:02,032 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:02,032 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-65b10d05-1008-4ab9-9613-2906f119da14 Address tcp://127.0.0.1:46857 Status: Status.closing
-2022-08-26 14:12:02,032 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-494e6b6f-ee34-4f34-a883-b213b991419a Address tcp://127.0.0.1:34319 Status: Status.closing
-2022-08-26 14:12:02,033 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:02,033 - distributed.scheduler - INFO - Scheduler closing all comms
-Dumped cluster state to /tmp/pytest-of-matthew/pytest-12/test_dump_cluster_state_unresp0/dump.yaml
-PASSED
-distributed/tests/test_utils_test.py::test_dump_cluster_unresponsive_remote_worker SKIPPED
-distributed/tests/test_utils_test.py::test_check_process_leak PASSED
-distributed/tests/test_utils_test.py::test_check_process_leak_slow_cleanup PASSED
-distributed/tests/test_utils_test.py::test_check_process_leak_pre_cleanup[False] PASSED
-distributed/tests/test_utils_test.py::test_check_process_leak_pre_cleanup[True] PASSED
-distributed/tests/test_utils_test.py::test_check_process_leak_post_cleanup[False] PASSED
-distributed/tests/test_utils_test.py::test_check_process_leak_post_cleanup[True] PASSED
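
The test_check_process_leak cases assert that no stray child processes survive a test, including the slow-cleanup and pre/post-cleanup variants. Independently of distributed's own helper, the same invariant can be expressed with the standard library; a minimal sketch:

    # Sketch: assert that a block of code leaves no live multiprocessing children,
    # with a grace period for slow cleanup (cf. test_check_process_leak_slow_cleanup).
    import multiprocessing as mp
    import time
    from contextlib import contextmanager

    @contextmanager
    def assert_no_process_leak(timeout=5.0):
        yield
        deadline = time.monotonic() + timeout
        while mp.active_children() and time.monotonic() < deadline:
            time.sleep(0.1)
        leaked = mp.active_children()
        assert not leaked, f"leaked processes: {leaked}"

    if __name__ == "__main__":
        with assert_no_process_leak():
            p = mp.Process(target=time.sleep, args=(0.2,))
            p.start()
            p.join()
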
-distributed/tests/test_utils_test.py::test_start_failure_worker[True] 2022-08-26 14:12:09,581 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:12:09,583 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:09,586 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:09,586 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41651
-2022-08-26 14:12:09,586 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:12:09,606 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:45145'
-2022-08-26 14:12:09,641 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:33271'
-2022-08-26 14:12:10,339 - distributed.nanny - ERROR - Failed to initialize Worker
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 853, in _run
-    worker = Worker(**worker_kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 729, in __init__
-    ServerNode.__init__(
-TypeError: Server.__init__() got an unexpected keyword argument 'foo'
-2022-08-26 14:12:10,377 - distributed.nanny - ERROR - Failed to initialize Worker
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 853, in _run
-    worker = Worker(**worker_kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 729, in __init__
-    ServerNode.__init__(
-TypeError: Server.__init__() got an unexpected keyword argument 'foo'
-2022-08-26 14:12:10,388 - distributed.nanny - ERROR - Failed to start process
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 438, in instantiate
-    result = await self.process.start()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 695, in start
-    msg = await self._wait_until_connected(uid)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 823, in _wait_until_connected
-    raise msg["exception"]
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/nanny.py", line 853, in _run
-    worker = Worker(**worker_kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 729, in __init__
-    ServerNode.__init__(
-TypeError: Server.__init__() got an unexpected keyword argument 'foo'
-2022-08-26 14:12:10,389 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:45145'.
-2022-08-26 14:12:10,389 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:12:10,390 - distributed.nanny - INFO - Worker process 654482 was killed by signal 15
-PASSED
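
The expected failure in test_start_failure_worker[True] is a plain keyword-forwarding error: the Nanny passes its worker_kwargs into Worker.__init__, which forwards unknown keys to the Server base class, so a bogus key like 'foo' surfaces as the TypeError in the traceback above. A stripped-down illustration with hypothetical stand-in classes (not distributed's code):

    # Illustration only: hypothetical Server/Worker stand-ins showing why an unknown
    # worker kwarg becomes "Server.__init__() got an unexpected keyword argument 'foo'".
    class Server:
        def __init__(self, address):
            self.address = address

    class Worker(Server):
        def __init__(self, address, threads=1, **kwargs):
            # Unrecognized keys fall through to the base class, as in the traceback.
            super().__init__(address, **kwargs)
            self.threads = threads

    try:
        Worker("tcp://127.0.0.1:0", foo="bar")
    except TypeError as exc:
        print(exc)  # Server.__init__() got an unexpected keyword argument 'foo'
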
-distributed/tests/test_utils_test.py::test_start_failure_worker[False] 2022-08-26 14:12:11,356 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:12:11,358 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:11,362 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:11,362 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36173
-2022-08-26 14:12:11,362 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:12:11,367 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-gw6jqaq6', purging
-2022-08-26 14:12:11,367 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-lpnvhlv0', purging
-PASSED
-distributed/tests/test_utils_test.py::test_start_failure_scheduler 2022-08-26 14:12:12,339 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:12:12,341 - distributed.scheduler - INFO - State start
-PASSED
-distributed/tests/test_utils_test.py::test_invalid_transitions 2022-08-26 14:12:12,352 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:12,354 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:12,354 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33879
-2022-08-26 14:12:12,354 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42125
-2022-08-26 14:12:12,357 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39145
-2022-08-26 14:12:12,357 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39145
-2022-08-26 14:12:12,358 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:12,358 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40279
-2022-08-26 14:12:12,358 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33879
-2022-08-26 14:12:12,358 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,358 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:12,358 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:12,358 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8etd8avg
-2022-08-26 14:12:12,358 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,360 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39145', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:12,360 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39145
-2022-08-26 14:12:12,360 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:12,360 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33879
-2022-08-26 14:12:12,360 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,361 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:12,374 - distributed.scheduler - INFO - Receive client connection: Client-c0c951e7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:12,374 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:12,390 - distributed.worker - ERROR - InvalidTransition: task-name :: memory->foo
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.3859217', 1661548332.386285)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.3859217', 1661548332.386304)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.3859217', 1661548332.3863218)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.3859217', 1661548332.386334)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.38666', 1661548332.3867376)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.38666', 1661548332.3867614)
-    ('task-name', 'memory', 'released', 'released', {'task-name': 'forgotten'}, 'test', 1661548332.3898864)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2450, in _transition
-    self._transition(ts, finish, *args, stimulus_id=stimulus_id),
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2456, in _transition
-    raise InvalidTransition(ts.key, start, finish, self.story(ts))
-distributed.worker_state_machine.InvalidTransition: InvalidTransition: task-name :: released->foo
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.3859217', 1661548332.386285)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.3859217', 1661548332.386304)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.3859217', 1661548332.3863218)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.3859217', 1661548332.386334)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.38666', 1661548332.3867376)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.38666', 1661548332.3867614)
-    ('task-name', 'memory', 'released', 'released', {'task-name': 'forgotten'}, 'test', 1661548332.3898864)
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2512, in _transitions
-    process_recs(recommendations.copy())
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2506, in process_recs
-    a_recs, a_instructions = self._transition(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2453, in _transition
-    raise InvalidTransition(ts.key, start, finish, self.story(ts)) from e
-distributed.worker_state_machine.InvalidTransition: InvalidTransition: task-name :: memory->foo
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.3859217', 1661548332.386285)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.3859217', 1661548332.386304)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.3859217', 1661548332.3863218)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.3859217', 1661548332.386334)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.38666', 1661548332.3867376)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.38666', 1661548332.3867614)
-    ('task-name', 'memory', 'released', 'released', {'task-name': 'forgotten'}, 'test', 1661548332.3898864)
-2022-08-26 14:12:12,391 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39145
-2022-08-26 14:12:12,391 - distributed.worker - INFO - Not waiting on executor to close
-2022-08-26 14:12:12,392 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8c02549a-5d5a-4578-8708-eda4f203d5db Address tcp://127.0.0.1:39145 Status: Status.closing
-2022-08-26 14:12:12,392 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39145', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:12:12,393 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39145
-2022-08-26 14:12:12,393 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:12,402 - distributed.scheduler - INFO - Remove client Client-c0c951e7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:12,402 - distributed.scheduler - INFO - Remove client Client-c0c951e7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:12,402 - distributed.scheduler - INFO - Close client connection: Client-c0c951e7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:12,403 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:12,403 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
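
The InvalidTransition in test_invalid_transitions comes from asking the worker state machine for a finish state ('foo') it has no rule for; the error carries the task's recorded story, as shown above. A toy sketch of that pattern with illustrative names (not distributed's actual implementation):

    # Toy "invalid transition" check with a recorded story, mirroring the shape of
    # the log above; class and method names here are illustrative only.
    class InvalidTransition(Exception):
        def __init__(self, key, start, finish, story):
            lines = "\n    ".join(repr(event) for event in story)
            super().__init__(f"{key} :: {start}->{finish}\n  Story:\n    {lines}")

    class TaskStateMachine:
        VALID = {("released", "waiting"), ("waiting", "ready"),
                 ("ready", "executing"), ("executing", "memory"),
                 ("memory", "released")}

        def __init__(self, key):
            self.key, self.state, self.story = key, "released", []

        def transition(self, finish, stimulus_id):
            if (self.state, finish) not in self.VALID:
                raise InvalidTransition(self.key, self.state, finish, self.story)
            self.story.append((self.key, self.state, finish, stimulus_id))
            self.state = finish

    sm = TaskStateMachine("task-name")
    for state in ("waiting", "ready", "executing", "memory"):
        sm.transition(state, "compute-task")
    sm.transition("foo", "test")  # raises InvalidTransition: task-name :: memory->foo
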
-distributed/tests/test_utils_test.py::test_invalid_worker_state 2022-08-26 14:12:12,409 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:12,411 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:12,411 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40757
-2022-08-26 14:12:12,411 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41907
-2022-08-26 14:12:12,414 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35883
-2022-08-26 14:12:12,414 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35883
-2022-08-26 14:12:12,414 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:12,414 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37601
-2022-08-26 14:12:12,414 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40757
-2022-08-26 14:12:12,414 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,414 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:12,414 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:12,414 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h3qguznv
-2022-08-26 14:12:12,414 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,416 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35883', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:12,416 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35883
-2022-08-26 14:12:12,416 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:12,416 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40757
-2022-08-26 14:12:12,417 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,417 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:12,430 - distributed.scheduler - INFO - Receive client connection: Client-c0d1e3ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:12,430 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:12,447 - distributed.worker_state_machine - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3250, in validate_task
-    self._validate_task_released(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3206, in _validate_task_released
-    assert ts.key not in self.data
-AssertionError
-2022-08-26 14:12:12,447 - distributed.worker - ERROR - Validate state failed
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3250, in validate_task
-    self._validate_task_released(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3206, in _validate_task_released
-    assert ts.key not in self.data
-AssertionError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2521, in validate_state
-    self.state.validate_state()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3327, in validate_state
-    self.validate_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3253, in validate_task
-    raise InvalidTaskState(
-distributed.worker_state_machine.InvalidTaskState: InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-2022-08-26 14:12:12,447 - distributed.worker - ERROR - InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3250, in validate_task
-    self._validate_task_released(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3206, in _validate_task_released
-    assert ts.key not in self.data
-AssertionError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2521, in validate_state
-    self.state.validate_state()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3327, in validate_state
-    self.validate_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3253, in validate_task
-    raise InvalidTaskState(
-distributed.worker_state_machine.InvalidTaskState: InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-2022-08-26 14:12:12,458 - distributed.worker_state_machine - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3250, in validate_task
-    self._validate_task_released(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3206, in _validate_task_released
-    assert ts.key not in self.data
-AssertionError
-2022-08-26 14:12:12,458 - distributed.worker - ERROR - Validate state failed
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3250, in validate_task
-    self._validate_task_released(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3206, in _validate_task_released
-    assert ts.key not in self.data
-AssertionError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2521, in validate_state
-    self.state.validate_state()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3327, in validate_state
-    self.validate_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3253, in validate_task
-    raise InvalidTaskState(
-distributed.worker_state_machine.InvalidTaskState: InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-2022-08-26 14:12:12,458 - distributed.worker - ERROR - InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3250, in validate_task
-    self._validate_task_released(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3206, in _validate_task_released
-    assert ts.key not in self.data
-AssertionError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2521, in validate_state
-    self.state.validate_state()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3327, in validate_state
-    self.validate_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3253, in validate_task
-    raise InvalidTaskState(
-distributed.worker_state_machine.InvalidTaskState: InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-2022-08-26 14:12:12,519 - distributed.scheduler - INFO - Remove client Client-c0d1e3ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:12,519 - distributed.scheduler - INFO - Remove client Client-c0d1e3ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:12,519 - distributed.scheduler - INFO - Close client connection: Client-c0d1e3ed-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:12,520 - distributed.worker_state_machine - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3250, in validate_task
-    self._validate_task_released(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3206, in _validate_task_released
-    assert ts.key not in self.data
-AssertionError
-2022-08-26 14:12:12,520 - distributed.worker - ERROR - InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-    ('free-keys', ('task-name',), 'remove-client-1661548332.5192206', 1661548332.519983)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3250, in validate_task
-    self._validate_task_released(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3206, in _validate_task_released
-    assert ts.key not in self.data
-AssertionError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2528, in _transitions
-    self.validate_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3253, in validate_task
-    raise InvalidTaskState(
-distributed.worker_state_machine.InvalidTaskState: InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-    ('free-keys', ('task-name',), 'remove-client-1661548332.5192206', 1661548332.519983)
-2022-08-26 14:12:12,521 - distributed.core - ERROR - InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-    ('free-keys', ('task-name',), 'remove-client-1661548332.5192206', 1661548332.519983)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3250, in validate_task
-    self._validate_task_released(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3206, in _validate_task_released
-    assert ts.key not in self.data
-AssertionError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 842, in handle_stream
-    handler(**merge(extra, msg))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1843, in _
-    self.handle_stimulus(event)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2528, in _transitions
-    self.validate_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3253, in validate_task
-    raise InvalidTaskState(
-distributed.worker_state_machine.InvalidTaskState: InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-    ('free-keys', ('task-name',), 'remove-client-1661548332.5192206', 1661548332.519983)
-2022-08-26 14:12:12,522 - distributed.worker - ERROR - InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-    ('free-keys', ('task-name',), 'remove-client-1661548332.5192206', 1661548332.519983)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3250, in validate_task
-    self._validate_task_released(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3206, in _validate_task_released
-    assert ts.key not in self.data
-AssertionError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 179, in wrapper
-    return await method(self, *args, **kwargs)  # type: ignore
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1210, in handle_scheduler
-    await self.handle_stream(comm)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 842, in handle_stream
-    handler(**merge(extra, msg))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1843, in _
-    self.handle_stimulus(event)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2528, in _transitions
-    self.validate_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3253, in validate_task
-    raise InvalidTaskState(
-distributed.worker_state_machine.InvalidTaskState: InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-    ('free-keys', ('task-name',), 'remove-client-1661548332.5192206', 1661548332.519983)
-2022-08-26 14:12:12,523 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35883
-2022-08-26 14:12:12,523 - distributed.worker - INFO - Not waiting on executor to close
-2022-08-26 14:12:12,523 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35883', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:12:12,523 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35883
-2022-08-26 14:12:12,524 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:12,524 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:50684 remote=tcp://127.0.0.1:40757>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:12:12,525 - tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOMainLoop object at 0x564040dc16a0>>, <Task finished name='Task-200542' coro=<Worker.handle_scheduler() done, defined at /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py:176> exception=InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-    ('free-keys', ('task-name',), 'remove-client-1661548332.5192206', 1661548332.519983)>)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3250, in validate_task
-    self._validate_task_released(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3206, in _validate_task_released
-    assert ts.key not in self.data
-AssertionError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 740, in _run_callback
-    ret = callback()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 764, in _discard_future_result
-    future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 179, in wrapper
-    return await method(self, *args, **kwargs)  # type: ignore
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1210, in handle_scheduler
-    await self.handle_stream(comm)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 842, in handle_stream
-    handler(**merge(extra, msg))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1843, in _
-    self.handle_stimulus(event)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2528, in _transitions
-    self.validate_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3253, in validate_task
-    raise InvalidTaskState(
-distributed.worker_state_machine.InvalidTaskState: InvalidTaskState: task-name :: released
-  Story:
-    ('task-name', 'compute-task', 'released', 'compute-task-1661548332.4435086', 1661548332.4438424)
-    ('task-name', 'released', 'waiting', 'waiting', {'task-name': 'ready'}, 'compute-task-1661548332.4435086', 1661548332.4438608)
-    ('task-name', 'waiting', 'ready', 'ready', {'task-name': 'executing'}, 'compute-task-1661548332.4435086', 1661548332.4438777)
-    ('task-name', 'ready', 'executing', 'executing', {}, 'compute-task-1661548332.4435086', 1661548332.443889)
-    ('task-name', 'put-in-memory', 'task-finished-1661548332.4442062', 1661548332.4442852)
-    ('task-name', 'executing', 'memory', 'memory', {}, 'task-finished-1661548332.4442062', 1661548332.4443083)
-    ('free-keys', ('task-name',), 'remove-client-1661548332.5192206', 1661548332.519983)
-2022-08-26 14:12:12,525 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:12,525 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_utils_test.py::test_raises_with_cause PASSED
-distributed/tests/test_utils_test.py::test_check_thread_leak SKIPPED
-distributed/tests/test_utils_test.py::test_fail_hard[True] 2022-08-26 14:12:12,533 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:12,535 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:12,535 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40863
-2022-08-26 14:12:12,535 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34453
-2022-08-26 14:12:12,538 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46549
-2022-08-26 14:12:12,538 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46549
-2022-08-26 14:12:12,538 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33675
-2022-08-26 14:12:12,538 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40863
-2022-08-26 14:12:12,538 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,539 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:12,539 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:12,539 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8nklc7pc
-2022-08-26 14:12:12,539 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,541 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46549', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:12,541 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46549
-2022-08-26 14:12:12,541 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:12,541 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40863
-2022-08-26 14:12:12,541 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,542 - distributed.worker - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_utils_test.py", line 815, in fail_sync
-    raise CustomError()
-test_utils_test.test_fail_hard.<locals>.CustomError
-2022-08-26 14:12:12,542 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:12,543 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46549
-2022-08-26 14:12:12,543 - distributed.worker - INFO - Not waiting on executor to close
-2022-08-26 14:12:12,543 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: FailWorker-ff2ce4f7-8809-4c49-9699-21669f267624 Address tcp://127.0.0.1:46549 Status: Status.closing
-2022-08-26 14:12:12,544 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46549', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:12,544 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46549
-2022-08-26 14:12:12,544 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:12,553 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:12,553 - distributed.scheduler - INFO - Scheduler closing all comms
-Failed worker tcp://127.0.0.1:46549
-PASSED
-distributed/tests/test_utils_test.py::test_fail_hard[False] 2022-08-26 14:12:12,559 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:12,560 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:12,560 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37025
-2022-08-26 14:12:12,561 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45775
-2022-08-26 14:12:12,563 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36883
-2022-08-26 14:12:12,563 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36883
-2022-08-26 14:12:12,563 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38831
-2022-08-26 14:12:12,564 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37025
-2022-08-26 14:12:12,564 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,564 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:12,564 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:12,564 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d8e92ijo
-2022-08-26 14:12:12,564 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,566 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36883', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:12,566 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36883
-2022-08-26 14:12:12,566 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:12,566 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37025
-2022-08-26 14:12:12,566 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:12,567 - distributed.worker - ERROR - 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 179, in wrapper
-    return await method(self, *args, **kwargs)  # type: ignore
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_utils_test.py", line 819, in fail_async
-    raise CustomError()
-test_utils_test.test_fail_hard.<locals>.CustomError
-2022-08-26 14:12:12,567 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36883
-2022-08-26 14:12:12,567 - distributed.worker - INFO - Not waiting on executor to close
-2022-08-26 14:12:12,568 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:12,568 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: FailWorker-2139c175-3b83-4317-8744-80b7e1ffe33c Address tcp://127.0.0.1:36883 Status: Status.closing
-2022-08-26 14:12:12,569 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36883', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:12,569 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36883
-2022-08-26 14:12:12,569 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:12,569 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:12,570 - distributed.scheduler - INFO - Scheduler closing all comms
-Failed worker tcp://127.0.0.1:36883
-PASSED
-distributed/tests/test_utils_test.py::test_popen_write_during_terminate_deadlock ------ stdout: returncode 0, ['/home/matthew/pkgsrc/install.20220728/bin/python3.10', '-c', "\nimport signal\nimport threading\n\ne = threading.Event()\n\ndef cb(signum, frame):\n    # 131072 is 2x the size of the default Linux pipe buffer\n    print('x' * 131072)\n    e.set()\n\nsignal.signal(signal.SIGINT, cb)\nprint('ready', flush=True)\ne.wait()\n"] ------
-[single deleted log line containing the 131072 'x' characters printed by the test subprocess; the run was wrapped across many mail lines and is elided here]
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-
-PASSED
-distributed/tests/test_utils_test.py::test_popen_timeout PASSED
-distributed/tests/test_utils_test.py::test_popen_always_prints_output PASSED
-distributed/tests/test_utils_test.py::test_freeze_batched_send PASSED
-distributed/tests/test_utils_test.py::test_wait_for_state 2022-08-26 14:12:13,664 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:13,666 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:13,666 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36911
-2022-08-26 14:12:13,666 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37711
-2022-08-26 14:12:13,669 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42333
-2022-08-26 14:12:13,669 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42333
-2022-08-26 14:12:13,669 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:13,669 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39359
-2022-08-26 14:12:13,669 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36911
-2022-08-26 14:12:13,669 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:13,669 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:13,669 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:13,670 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mkbewr_i
-2022-08-26 14:12:13,670 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:13,671 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42333', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:13,672 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42333
-2022-08-26 14:12:13,672 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:13,672 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36911
-2022-08-26 14:12:13,672 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:13,672 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:13,685 - distributed.scheduler - INFO - Receive client connection: Client-c191700f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:13,686 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:13,689 - distributed.worker - INFO - Run out-of-band function 'wait_for_state'
-2022-08-26 14:12:13,703 - distributed.worker - INFO - Run out-of-band function 'wait_for_state'
-2022-08-26 14:12:13,924 - distributed.scheduler - INFO - Remove client Client-c191700f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:13,924 - distributed.scheduler - INFO - Remove client Client-c191700f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:13,924 - distributed.scheduler - INFO - Close client connection: Client-c191700f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:13,925 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42333
-2022-08-26 14:12:13,925 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42333', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:13,925 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42333
-2022-08-26 14:12:13,925 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:13,926 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-21e93d73-b4c2-4740-9845-b073b2f569f6 Address tcp://127.0.0.1:42333 Status: Status.closing
-2022-08-26 14:12:13,926 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:13,927 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_utils_test.py::test_wait_for_stimulus 2022-08-26 14:12:14,157 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:14,159 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:14,159 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38941
-2022-08-26 14:12:14,159 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33069
-2022-08-26 14:12:14,162 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35929
-2022-08-26 14:12:14,162 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35929
-2022-08-26 14:12:14,162 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:14,162 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42241
-2022-08-26 14:12:14,162 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38941
-2022-08-26 14:12:14,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:14,162 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:14,163 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:14,163 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2mekgxza
-2022-08-26 14:12:14,163 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:14,164 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35929', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:14,165 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35929
-2022-08-26 14:12:14,165 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:14,165 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38941
-2022-08-26 14:12:14,165 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:14,165 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:14,179 - distributed.scheduler - INFO - Receive client connection: Client-c1dcb819-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:14,179 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:14,243 - distributed.worker - INFO - Run out-of-band function 'wait_for_stimulus'
-2022-08-26 14:12:14,255 - distributed.scheduler - INFO - Remove client Client-c1dcb819-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:14,255 - distributed.scheduler - INFO - Remove client Client-c1dcb819-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:14,256 - distributed.scheduler - INFO - Close client connection: Client-c1dcb819-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:14,257 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35929
-2022-08-26 14:12:14,257 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-832c8a18-9813-486a-a38c-0673502baa31 Address tcp://127.0.0.1:35929 Status: Status.closing
-2022-08-26 14:12:14,258 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35929', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:14,258 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35929
-2022-08-26 14:12:14,258 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:14,258 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:14,258 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_utils_test.py::test_ws_with_running_task[executing] PASSED
-distributed/tests/test_utils_test.py::test_ws_with_running_task[long-running] PASSED
-distributed/tests/test_variable.py::test_variable 2022-08-26 14:12:14,488 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:14,490 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:14,490 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46069
-2022-08-26 14:12:14,490 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38225
-2022-08-26 14:12:14,495 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45529
-2022-08-26 14:12:14,495 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45529
-2022-08-26 14:12:14,495 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:14,495 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36727
-2022-08-26 14:12:14,495 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46069
-2022-08-26 14:12:14,495 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:14,495 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:14,495 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:14,495 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sot4o272
-2022-08-26 14:12:14,495 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:14,496 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36615
-2022-08-26 14:12:14,496 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36615
-2022-08-26 14:12:14,496 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:14,496 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32881
-2022-08-26 14:12:14,496 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46069
-2022-08-26 14:12:14,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:14,496 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:14,496 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:14,496 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-86mlqmgr
-2022-08-26 14:12:14,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:14,499 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45529', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:14,499 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45529
-2022-08-26 14:12:14,499 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:14,500 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36615', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:14,500 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36615
-2022-08-26 14:12:14,500 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:14,500 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46069
-2022-08-26 14:12:14,500 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:14,501 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46069
-2022-08-26 14:12:14,501 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:14,501 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:14,501 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:14,515 - distributed.scheduler - INFO - Receive client connection: Client-c20ffb0c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:14,515 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:14,618 - distributed.scheduler - INFO - Remove client variable-x
-2022-08-26 14:12:14,629 - distributed.scheduler - INFO - Remove client Client-c20ffb0c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:14,629 - distributed.scheduler - INFO - Remove client Client-c20ffb0c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:14,629 - distributed.scheduler - INFO - Close client connection: Client-c20ffb0c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:14,630 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45529
-2022-08-26 14:12:14,630 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36615
-2022-08-26 14:12:14,631 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45529', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:14,631 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45529
-2022-08-26 14:12:14,632 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36615', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:14,632 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36615
-2022-08-26 14:12:14,632 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:14,632 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5f849819-55bb-4fdc-8961-eeab9f0a6faa Address tcp://127.0.0.1:45529 Status: Status.closing
-2022-08-26 14:12:14,632 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-85e3abd2-e84b-411e-925e-f2cfafba0b77 Address tcp://127.0.0.1:36615 Status: Status.closing
-2022-08-26 14:12:14,633 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:14,633 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_variable.py::test_variable_in_task 2022-08-26 14:12:15,224 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 14:12:15,227 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:12:15,229 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:15,231 - distributed.scheduler - INFO - -----------------------------------------------
-2022-08-26 14:12:15,231 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:15,231 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:47705
-2022-08-26 14:12:15,231 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 14:12:15,233 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:34599'
-2022-08-26 14:12:15,396 - distributed.scheduler - INFO - Receive client connection: Client-c2452b46-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:15,585 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:15,990 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36595
-2022-08-26 14:12:15,991 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36595
-2022-08-26 14:12:15,991 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33861
-2022-08-26 14:12:15,991 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:47705
-2022-08-26 14:12:15,991 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:15,991 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:15,991 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:15,991 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1zhe4tgi
-2022-08-26 14:12:15,991 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:15,994 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36595', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:15,994 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36595
-2022-08-26 14:12:15,994 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:15,994 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:47705
-2022-08-26 14:12:15,994 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:15,995 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,098 - distributed.scheduler - INFO - Receive client connection: Client-worker-c30161ea-2583-11ed-bd88-00d861bc4509
-2022-08-26 14:12:16,099 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,114 - distributed.scheduler - INFO - Remove client Client-c2452b46-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:16,114 - distributed.scheduler - INFO - Remove client Client-c2452b46-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:16,114 - distributed.scheduler - INFO - Close client connection: Client-c2452b46-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:16,115 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 14:12:16,115 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:34599'.
-2022-08-26 14:12:16,115 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:12:16,116 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36595
-2022-08-26 14:12:16,116 - distributed.scheduler - INFO - Remove client Client-worker-c30161ea-2583-11ed-bd88-00d861bc4509
-2022-08-26 14:12:16,116 - distributed.scheduler - INFO - Remove client Client-worker-c30161ea-2583-11ed-bd88-00d861bc4509
-2022-08-26 14:12:16,116 - distributed.scheduler - INFO - Close client connection: Client-worker-c30161ea-2583-11ed-bd88-00d861bc4509
-2022-08-26 14:12:16,117 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-de7e834f-2a63-4ea6-95a2-e10f5e5046ae Address tcp://127.0.0.1:36595 Status: Status.closing
-2022-08-26 14:12:16,117 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36595', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:16,117 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36595
-2022-08-26 14:12:16,117 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:16,305 - distributed.dask_worker - INFO - End worker
-2022-08-26 14:12:16,379 - distributed._signals - INFO - Received signal SIGINT (2)
-2022-08-26 14:12:16,379 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:16,379 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:12:16,379 - distributed.scheduler - INFO - Stopped scheduler at 'tcp://192.168.1.159:47705'
-2022-08-26 14:12:16,379 - distributed.scheduler - INFO - End scheduler
-PASSED
-distributed/tests/test_variable.py::test_delete_unset_variable 2022-08-26 14:12:16,548 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:16,550 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:16,550 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45845
-2022-08-26 14:12:16,550 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41903
-2022-08-26 14:12:16,555 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46071
-2022-08-26 14:12:16,555 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46071
-2022-08-26 14:12:16,555 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:16,555 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35563
-2022-08-26 14:12:16,555 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45845
-2022-08-26 14:12:16,555 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,555 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:16,555 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:16,555 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xmr4423j
-2022-08-26 14:12:16,555 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,556 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34737
-2022-08-26 14:12:16,556 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34737
-2022-08-26 14:12:16,556 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:16,556 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40419
-2022-08-26 14:12:16,556 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45845
-2022-08-26 14:12:16,556 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,556 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:16,556 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:16,556 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-huapfn1f
-2022-08-26 14:12:16,556 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,559 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46071', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:16,559 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46071
-2022-08-26 14:12:16,559 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,560 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34737', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:16,560 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34737
-2022-08-26 14:12:16,560 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,560 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45845
-2022-08-26 14:12:16,560 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,560 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45845
-2022-08-26 14:12:16,560 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,561 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,561 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,574 - distributed.scheduler - INFO - Receive client connection: Client-c34a4521-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:16,575 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,586 - distributed.scheduler - INFO - Remove client variable-variable-cb9f9c9c30f14f769e34a09ef28732c7
-2022-08-26 14:12:16,586 - distributed.scheduler - INFO - Remove client Client-c34a4521-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:16,586 - distributed.scheduler - INFO - Remove client Client-c34a4521-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:16,587 - distributed.scheduler - INFO - Close client connection: Client-c34a4521-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:16,587 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46071
-2022-08-26 14:12:16,587 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34737
-2022-08-26 14:12:16,588 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46071', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:16,588 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46071
-2022-08-26 14:12:16,589 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34737', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:16,589 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34737
-2022-08-26 14:12:16,589 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:16,589 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-be7e2c51-064f-4a11-89f5-5f52208c45c1 Address tcp://127.0.0.1:46071 Status: Status.closing
-2022-08-26 14:12:16,589 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3aaa9219-3b3e-4482-838c-e0e86d824221 Address tcp://127.0.0.1:34737 Status: Status.closing
-2022-08-26 14:12:16,590 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:16,590 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_variable.py::test_queue_with_data 2022-08-26 14:12:16,818 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:16,820 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:16,820 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39777
-2022-08-26 14:12:16,820 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35631
-2022-08-26 14:12:16,825 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37867
-2022-08-26 14:12:16,825 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37867
-2022-08-26 14:12:16,825 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:16,825 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44857
-2022-08-26 14:12:16,825 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39777
-2022-08-26 14:12:16,825 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,825 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:16,825 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:16,825 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_gzjgpuj
-2022-08-26 14:12:16,825 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,826 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46065
-2022-08-26 14:12:16,826 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46065
-2022-08-26 14:12:16,826 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:16,826 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45865
-2022-08-26 14:12:16,826 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39777
-2022-08-26 14:12:16,826 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,826 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:16,826 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:16,826 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bvi959qx
-2022-08-26 14:12:16,826 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,829 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37867', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:16,829 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37867
-2022-08-26 14:12:16,829 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,830 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46065', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:16,830 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46065
-2022-08-26 14:12:16,830 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,830 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39777
-2022-08-26 14:12:16,830 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,831 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39777
-2022-08-26 14:12:16,831 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:16,831 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,831 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,845 - distributed.scheduler - INFO - Receive client connection: Client-c3738082-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:16,845 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:16,856 - distributed.scheduler - INFO - Remove client Client-c3738082-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:16,856 - distributed.scheduler - INFO - Remove client Client-c3738082-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:16,856 - distributed.scheduler - INFO - Close client connection: Client-c3738082-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:16,857 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37867
-2022-08-26 14:12:16,857 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46065
-2022-08-26 14:12:16,858 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37867', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:16,858 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37867
-2022-08-26 14:12:16,858 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46065', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:16,858 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46065
-2022-08-26 14:12:16,858 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:16,858 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-97afbfe2-2d78-4be7-9220-805a86b78271 Address tcp://127.0.0.1:37867 Status: Status.closing
-2022-08-26 14:12:16,859 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8c63c56f-50fc-4543-8225-5f94a0b1b4ad Address tcp://127.0.0.1:46065 Status: Status.closing
-2022-08-26 14:12:16,859 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:16,860 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
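
For reference, the start-up/teardown pattern in the log above (scheduler start, two
workers registering, a client connecting and disconnecting, then "Lost all workers"
and "Scheduler closing...") is the lifecycle of a short-lived local cluster. A minimal
illustrative sketch using the public distributed API, not the test code itself:

    # Sketch: start a 2-worker local cluster, run one task, tear it down.
    # The connect/close steps correspond to the kind of INFO lines logged above.
    from distributed import Client, LocalCluster

    if __name__ == "__main__":
        cluster = LocalCluster(n_workers=2, threads_per_worker=1)
        client = Client(cluster)                      # client connection registered
        print(client.submit(sum, [1, 2, 3]).result()) # 6
        client.close()                                # client removed
        cluster.close()                               # workers stop, scheduler closes
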
-distributed/tests/test_variable.py::test_sync 2022-08-26 14:12:18,041 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:12:18,043 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:18,047 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:18,047 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43629
-2022-08-26 14:12:18,047 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:12:18,057 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34509
-2022-08-26 14:12:18,057 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34509
-2022-08-26 14:12:18,057 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37853
-2022-08-26 14:12:18,057 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43629
-2022-08-26 14:12:18,057 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,057 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:18,057 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:18,057 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ski0jzpt
-2022-08-26 14:12:18,057 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,092 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36793
-2022-08-26 14:12:18,092 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36793
-2022-08-26 14:12:18,092 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42127
-2022-08-26 14:12:18,092 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43629
-2022-08-26 14:12:18,092 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,093 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:18,093 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:18,093 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-woktza9f
-2022-08-26 14:12:18,093 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,361 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34509', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:18,642 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34509
-2022-08-26 14:12:18,642 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:18,642 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43629
-2022-08-26 14:12:18,642 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,643 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36793', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:18,643 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:18,643 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36793
-2022-08-26 14:12:18,643 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:18,643 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43629
-2022-08-26 14:12:18,644 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,644 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:18,650 - distributed.scheduler - INFO - Receive client connection: Client-c486d226-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:18,650 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:12:18,672 - distributed.scheduler - INFO - Remove client Client-c486d226-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:18,672 - distributed.scheduler - INFO - Remove client Client-c486d226-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:18,672 - distributed.scheduler - INFO - Close client connection: Client-c486d226-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_variable.py::test_hold_futures 2022-08-26 14:12:18,685 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:18,686 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:18,687 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38479
-2022-08-26 14:12:18,687 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46317
-2022-08-26 14:12:18,687 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-woktza9f', purging
-2022-08-26 14:12:18,687 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ski0jzpt', purging
-2022-08-26 14:12:18,692 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44509
-2022-08-26 14:12:18,692 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44509
-2022-08-26 14:12:18,692 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:18,692 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39583
-2022-08-26 14:12:18,692 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38479
-2022-08-26 14:12:18,692 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,692 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:18,692 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:18,692 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c4lq6afj
-2022-08-26 14:12:18,692 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,693 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34773
-2022-08-26 14:12:18,693 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34773
-2022-08-26 14:12:18,693 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:18,693 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38565
-2022-08-26 14:12:18,693 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38479
-2022-08-26 14:12:18,693 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,693 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:18,693 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:18,693 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wqxxk6qt
-2022-08-26 14:12:18,693 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,696 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44509', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:18,696 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44509
-2022-08-26 14:12:18,697 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:18,697 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34773', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:18,697 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34773
-2022-08-26 14:12:18,697 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:18,697 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38479
-2022-08-26 14:12:18,698 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,698 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38479
-2022-08-26 14:12:18,698 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:18,698 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:18,698 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:18,712 - distributed.scheduler - INFO - Receive client connection: Client-c490683b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:18,712 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:18,725 - distributed.scheduler - INFO - Remove client Client-c490683b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:18,725 - distributed.scheduler - INFO - Remove client Client-c490683b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:18,726 - distributed.scheduler - INFO - Close client connection: Client-c490683b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:18,831 - distributed.scheduler - INFO - Receive client connection: Client-c4a286b9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:18,831 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:18,843 - distributed.scheduler - INFO - Remove client Client-c4a286b9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:18,843 - distributed.scheduler - INFO - Remove client Client-c4a286b9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:18,843 - distributed.scheduler - INFO - Close client connection: Client-c4a286b9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:18,844 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44509
-2022-08-26 14:12:18,844 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34773
-2022-08-26 14:12:18,845 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44509', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:18,845 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44509
-2022-08-26 14:12:18,845 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34773', name: 1, status: closing, memory: 1, processing: 0>
-2022-08-26 14:12:18,845 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34773
-2022-08-26 14:12:18,845 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:18,845 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5f64f099-9f1f-4a60-834d-05033d7d80f6 Address tcp://127.0.0.1:44509 Status: Status.closing
-2022-08-26 14:12:18,846 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3553a191-2240-47af-9cda-7471654b5c3e Address tcp://127.0.0.1:34773 Status: Status.closing
-2022-08-26 14:12:18,847 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:18,847 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_variable.py::test_timeout 2022-08-26 14:12:19,076 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:19,078 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:19,078 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39425
-2022-08-26 14:12:19,078 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46005
-2022-08-26 14:12:19,083 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39981
-2022-08-26 14:12:19,083 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39981
-2022-08-26 14:12:19,083 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:19,083 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35569
-2022-08-26 14:12:19,083 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39425
-2022-08-26 14:12:19,083 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:19,083 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:19,083 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:19,083 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lxape6_e
-2022-08-26 14:12:19,083 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:19,084 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34189
-2022-08-26 14:12:19,084 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34189
-2022-08-26 14:12:19,084 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:19,084 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34379
-2022-08-26 14:12:19,084 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39425
-2022-08-26 14:12:19,084 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:19,084 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:19,084 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:19,084 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-r9an9ipr
-2022-08-26 14:12:19,084 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:19,087 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39981', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:19,087 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39981
-2022-08-26 14:12:19,087 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:19,088 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34189', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:19,088 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34189
-2022-08-26 14:12:19,088 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:19,088 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39425
-2022-08-26 14:12:19,089 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:19,089 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39425
-2022-08-26 14:12:19,089 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:19,089 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:19,089 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:19,103 - distributed.scheduler - INFO - Receive client connection: Client-c4cc10f5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:19,103 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:19,304 - distributed.core - ERROR - Exception while handling op variable_get
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/variable.py", line 90, in _
-    await self.started.wait()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/locks.py", line 269, in wait
-    await fut
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/variable.py", line 92, in get
-    await asyncio.wait_for(_(), timeout=left)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-2022-08-26 14:12:19,316 - distributed.core - ERROR - Exception while handling op variable_get
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/variable.py", line 90, in _
-    await self.started.wait()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/locks.py", line 269, in wait
-    await fut
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/variable.py", line 92, in get
-    await asyncio.wait_for(_(), timeout=left)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-2022-08-26 14:12:19,318 - distributed.scheduler - INFO - Remove client Client-c4cc10f5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:19,318 - distributed.scheduler - INFO - Remove client Client-c4cc10f5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:19,318 - distributed.scheduler - INFO - Close client connection: Client-c4cc10f5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:19,318 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39981
-2022-08-26 14:12:19,319 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34189
-2022-08-26 14:12:19,320 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39981', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:19,320 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39981
-2022-08-26 14:12:19,320 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34189', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:19,320 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34189
-2022-08-26 14:12:19,320 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:19,320 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-378bd7c4-2e14-4331-b1d2-0868a8912fe4 Address tcp://127.0.0.1:39981 Status: Status.closing
-2022-08-26 14:12:19,321 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3f8a59e4-0c8c-4262-9168-656ed712c10d Address tcp://127.0.0.1:34189 Status: Status.closing
-2022-08-26 14:12:19,321 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:19,321 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
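
The "Exception while handling op variable_get" tracebacks above are the expected path
for a Variable.get that times out: the handler waits on an internal event under
asyncio.wait_for, the wait is cancelled at the deadline, and wait_for re-raises
TimeoutError chained from the CancelledError. A standalone asyncio sketch (assumed
for illustration, not distributed code) that produces the same chained exception on
Python 3.10:

    # Sketch: asyncio.wait_for cancels the pending wait at the deadline and raises
    # TimeoutError from the CancelledError, matching the tracebacks above.
    import asyncio

    async def main():
        started = asyncio.Event()          # never set, like an unset Variable
        try:
            await asyncio.wait_for(started.wait(), timeout=0.1)
        except asyncio.TimeoutError as exc:
            print("timed out, cause:", repr(exc.__cause__))

    asyncio.run(main())
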
-distributed/tests/test_variable.py::test_timeout_sync 2022-08-26 14:12:20,501 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:12:20,504 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:20,507 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:20,507 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44711
-2022-08-26 14:12:20,507 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:12:20,535 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41159
-2022-08-26 14:12:20,535 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41159
-2022-08-26 14:12:20,535 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39007
-2022-08-26 14:12:20,535 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44711
-2022-08-26 14:12:20,535 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:20,535 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:20,535 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:20,535 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6lfcjchr
-2022-08-26 14:12:20,535 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:20,572 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45535
-2022-08-26 14:12:20,573 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45535
-2022-08-26 14:12:20,573 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46757
-2022-08-26 14:12:20,573 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44711
-2022-08-26 14:12:20,573 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:20,573 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:20,573 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:20,573 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6tq_ou5o
-2022-08-26 14:12:20,573 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:20,840 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41159', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:21,121 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41159
-2022-08-26 14:12:21,121 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:21,121 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44711
-2022-08-26 14:12:21,121 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:21,122 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45535', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:21,122 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45535
-2022-08-26 14:12:21,122 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:21,122 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:21,122 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44711
-2022-08-26 14:12:21,123 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:21,123 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:21,128 - distributed.scheduler - INFO - Receive client connection: Client-c6011755-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:21,128 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:21,331 - distributed.core - ERROR - Exception while handling op variable_get
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/variable.py", line 90, in _
-    await self.started.wait()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/locks.py", line 269, in wait
-    await fut
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/variable.py", line 92, in get
-    await asyncio.wait_for(_(), timeout=left)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-2022-08-26 14:12:21,412 - distributed.core - ERROR - Exception while handling op variable_get
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/variable.py", line 90, in _
-    await self.started.wait()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/locks.py", line 269, in wait
-    await fut
-asyncio.exceptions.CancelledError
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
-    return fut.result()
-asyncio.exceptions.CancelledError
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/variable.py", line 92, in get
-    await asyncio.wait_for(_(), timeout=left)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
-    raise exceptions.TimeoutError() from exc
-asyncio.exceptions.TimeoutError
-PASSED2022-08-26 14:12:21,414 - distributed.scheduler - INFO - Remove client Client-c6011755-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:21,415 - distributed.scheduler - INFO - Remove client Client-c6011755-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:21,415 - distributed.scheduler - INFO - Close client connection: Client-c6011755-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_variable.py::test_cleanup 2022-08-26 14:12:21,426 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:21,428 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:21,428 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35727
-2022-08-26 14:12:21,428 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36979
-2022-08-26 14:12:21,429 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-6lfcjchr', purging
-2022-08-26 14:12:21,429 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-6tq_ou5o', purging
-2022-08-26 14:12:21,433 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32819
-2022-08-26 14:12:21,433 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32819
-2022-08-26 14:12:21,433 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:21,433 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33319
-2022-08-26 14:12:21,433 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35727
-2022-08-26 14:12:21,433 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:21,434 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:21,434 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:21,434 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q6glfaxk
-2022-08-26 14:12:21,434 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:21,434 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38437
-2022-08-26 14:12:21,434 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38437
-2022-08-26 14:12:21,434 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:21,434 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42179
-2022-08-26 14:12:21,434 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35727
-2022-08-26 14:12:21,435 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:21,435 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:21,435 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:21,435 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y7mf026k
-2022-08-26 14:12:21,435 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:21,438 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32819', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:21,438 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32819
-2022-08-26 14:12:21,438 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:21,438 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38437', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:21,439 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38437
-2022-08-26 14:12:21,439 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:21,439 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35727
-2022-08-26 14:12:21,439 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:21,439 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35727
-2022-08-26 14:12:21,439 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:21,440 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:21,440 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:21,453 - distributed.scheduler - INFO - Receive client connection: Client-c632b923-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:21,454 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:21,569 - distributed.scheduler - INFO - Remove client Client-c632b923-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:21,569 - distributed.scheduler - INFO - Remove client Client-c632b923-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:21,570 - distributed.scheduler - INFO - Close client connection: Client-c632b923-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:21,571 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32819
-2022-08-26 14:12:21,571 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38437
-2022-08-26 14:12:21,572 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38437', name: 1, status: closing, memory: 1, processing: 0>
-2022-08-26 14:12:21,572 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38437
-2022-08-26 14:12:21,572 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5b56a9e1-2003-4dd4-9079-4d838a870255 Address tcp://127.0.0.1:38437 Status: Status.closing
-2022-08-26 14:12:21,572 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4f4ad95a-3762-4975-abd7-9242833d84b4 Address tcp://127.0.0.1:32819 Status: Status.closing
-2022-08-26 14:12:21,573 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32819', name: 0, status: closing, memory: 0, processing: 1>
-2022-08-26 14:12:21,573 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32819
-2022-08-26 14:12:21,573 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:21,574 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:21,574 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_variable.py::test_pickleable 2022-08-26 14:12:22,758 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:12:22,760 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:22,763 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:22,763 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34069
-2022-08-26 14:12:22,763 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:12:22,777 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43507
-2022-08-26 14:12:22,777 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43507
-2022-08-26 14:12:22,777 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45687
-2022-08-26 14:12:22,777 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34069
-2022-08-26 14:12:22,777 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:22,777 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:22,777 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:22,777 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oc2vm8lg
-2022-08-26 14:12:22,777 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:22,820 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35117
-2022-08-26 14:12:22,820 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35117
-2022-08-26 14:12:22,820 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41185
-2022-08-26 14:12:22,820 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34069
-2022-08-26 14:12:22,820 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:22,820 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:22,820 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:22,820 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-osl282su
-2022-08-26 14:12:22,820 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,082 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43507', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:23,361 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43507
-2022-08-26 14:12:23,361 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,361 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34069
-2022-08-26 14:12:23,361 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,362 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35117', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:23,362 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35117
-2022-08-26 14:12:23,362 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,362 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,362 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34069
-2022-08-26 14:12:23,362 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,363 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,369 - distributed.scheduler - INFO - Receive client connection: Client-c756e240-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,369 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,387 - distributed.scheduler - INFO - Receive client connection: Client-worker-c7598005-2583-11ed-be72-00d861bc4509
-2022-08-26 14:12:23,387 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:12:23,392 - distributed.scheduler - INFO - Remove client Client-c756e240-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,392 - distributed.scheduler - INFO - Remove client Client-c756e240-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,392 - distributed.scheduler - INFO - Close client connection: Client-c756e240-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_variable.py::test_timeout_get 2022-08-26 14:12:23,405 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:23,407 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:23,407 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36535
-2022-08-26 14:12:23,407 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42169
-2022-08-26 14:12:23,408 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-oc2vm8lg', purging
-2022-08-26 14:12:23,408 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-osl282su', purging
-2022-08-26 14:12:23,412 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34617
-2022-08-26 14:12:23,412 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34617
-2022-08-26 14:12:23,412 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:23,412 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33193
-2022-08-26 14:12:23,412 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36535
-2022-08-26 14:12:23,413 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,413 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:23,413 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:23,413 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3htq68cy
-2022-08-26 14:12:23,413 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,413 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34789
-2022-08-26 14:12:23,413 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34789
-2022-08-26 14:12:23,413 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:23,413 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38391
-2022-08-26 14:12:23,413 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36535
-2022-08-26 14:12:23,413 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,413 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:23,414 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:23,414 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0s52pkif
-2022-08-26 14:12:23,414 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,417 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34617', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:23,417 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34617
-2022-08-26 14:12:23,417 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,417 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34789', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:23,418 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34789
-2022-08-26 14:12:23,418 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,418 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36535
-2022-08-26 14:12:23,418 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,418 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36535
-2022-08-26 14:12:23,418 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,418 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,418 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,432 - distributed.scheduler - INFO - Receive client connection: Client-c760aa60-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,432 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,443 - distributed.scheduler - INFO - Remove client Client-c760aa60-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,444 - distributed.scheduler - INFO - Remove client Client-c760aa60-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,444 - distributed.scheduler - INFO - Close client connection: Client-c760aa60-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,444 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34617
-2022-08-26 14:12:23,444 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34789
-2022-08-26 14:12:23,445 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34617', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:23,445 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34617
-2022-08-26 14:12:23,446 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34789', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:23,446 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34789
-2022-08-26 14:12:23,446 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:23,446 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9e337eab-46d4-4a57-9f4f-e7eb63ddd0e9 Address tcp://127.0.0.1:34617 Status: Status.closing
-2022-08-26 14:12:23,446 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2182544a-f5f4-4c1a-9032-6fd63b2442da Address tcp://127.0.0.1:34789 Status: Status.closing
-2022-08-26 14:12:23,447 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:23,447 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_variable.py::test_race SKIPPED (need --runslo...)
-distributed/tests/test_variable.py::test_Future_knows_status_immediately 2022-08-26 14:12:23,680 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:23,682 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:23,682 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34793
-2022-08-26 14:12:23,682 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39465
-2022-08-26 14:12:23,686 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42001
-2022-08-26 14:12:23,686 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42001
-2022-08-26 14:12:23,686 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:23,686 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42299
-2022-08-26 14:12:23,686 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34793
-2022-08-26 14:12:23,687 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,687 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:23,687 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:23,687 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4uqtjmd2
-2022-08-26 14:12:23,687 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,687 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41629
-2022-08-26 14:12:23,687 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41629
-2022-08-26 14:12:23,687 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:23,687 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35461
-2022-08-26 14:12:23,687 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34793
-2022-08-26 14:12:23,687 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,688 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:23,688 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:23,688 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_tvqrv33
-2022-08-26 14:12:23,688 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,691 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42001', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:23,691 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42001
-2022-08-26 14:12:23,691 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,691 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41629', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:23,692 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41629
-2022-08-26 14:12:23,692 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,692 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34793
-2022-08-26 14:12:23,692 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,692 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34793
-2022-08-26 14:12:23,692 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,692 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,693 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,706 - distributed.scheduler - INFO - Receive client connection: Client-c78a7bb0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,706 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,713 - distributed.scheduler - INFO - Receive client connection: Client-c78b86c9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,713 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,720 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:12:23,725 - distributed.scheduler - INFO - Remove client Client-c78b86c9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,725 - distributed.scheduler - INFO - Remove client Client-c78b86c9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,726 - distributed.scheduler - INFO - Close client connection: Client-c78b86c9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,729 - distributed.scheduler - INFO - Remove client Client-c78a7bb0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,729 - distributed.scheduler - INFO - Remove client Client-c78a7bb0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,729 - distributed.scheduler - INFO - Close client connection: Client-c78a7bb0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,729 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42001
-2022-08-26 14:12:23,730 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41629
-2022-08-26 14:12:23,731 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42001', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:23,731 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42001
-2022-08-26 14:12:23,731 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41629', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:23,731 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41629
-2022-08-26 14:12:23,731 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:23,731 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4bbfa6c8-6678-49bb-a4bc-52e227ac24f5 Address tcp://127.0.0.1:42001 Status: Status.closing
-2022-08-26 14:12:23,731 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c7d0e4a2-bc2b-4d4a-ab1c-7ba595978f4c Address tcp://127.0.0.1:41629 Status: Status.closing
-2022-08-26 14:12:23,732 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:23,733 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
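
The "Compute Failed" warning above (div with args (1, 0) raising ZeroDivisionError) is
how a worker reports a task whose function raised; the corresponding Future becomes
erred and the exception is retrievable on the client. A short illustrative sketch with
the public API, not the actual test:

    # Sketch: a submitted task that raises becomes an erred Future;
    # exception() returns the remote error and status becomes 'error'.
    from distributed import Client

    def div(a, b):
        return a / b

    if __name__ == "__main__":
        client = Client(processes=False)   # small in-process cluster
        fut = client.submit(div, 1, 0)
        print(fut.exception())             # ZeroDivisionError('division by zero')
        print(fut.status)                  # 'error'
        client.close()
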
-distributed/tests/test_variable.py::test_erred_future 2022-08-26 14:12:23,962 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:23,963 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:23,964 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33581
-2022-08-26 14:12:23,964 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40387
-2022-08-26 14:12:23,968 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36291
-2022-08-26 14:12:23,968 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36291
-2022-08-26 14:12:23,968 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:23,968 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45231
-2022-08-26 14:12:23,968 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33581
-2022-08-26 14:12:23,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,968 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:23,969 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:23,969 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-o3laqtry
-2022-08-26 14:12:23,969 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,969 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43625
-2022-08-26 14:12:23,969 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43625
-2022-08-26 14:12:23,969 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:23,969 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44055
-2022-08-26 14:12:23,969 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33581
-2022-08-26 14:12:23,969 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,970 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:23,970 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:23,970 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-py_73esd
-2022-08-26 14:12:23,970 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,972 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36291', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:23,973 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36291
-2022-08-26 14:12:23,973 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,973 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43625', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:23,973 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43625
-2022-08-26 14:12:23,974 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,974 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33581
-2022-08-26 14:12:23,974 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,974 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33581
-2022-08-26 14:12:23,974 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:23,974 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,975 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:23,988 - distributed.scheduler - INFO - Receive client connection: Client-c7b582cd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:23,988 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:24,001 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:12:24,103 - distributed.scheduler - INFO - Remove client Client-c7b582cd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:24,103 - distributed.scheduler - INFO - Remove client Client-c7b582cd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:24,104 - distributed.scheduler - INFO - Close client connection: Client-c7b582cd-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:24,104 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36291
-2022-08-26 14:12:24,104 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43625
-2022-08-26 14:12:24,105 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36291', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:24,105 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36291
-2022-08-26 14:12:24,105 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43625', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:24,106 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43625
-2022-08-26 14:12:24,106 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:24,106 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c8ad360b-2b44-4faa-b95e-651420cd0f70 Address tcp://127.0.0.1:36291 Status: Status.closing
-2022-08-26 14:12:24,106 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-02de355f-ac64-4038-9b23-99728e506e9d Address tcp://127.0.0.1:43625 Status: Status.closing
-2022-08-26 14:12:24,107 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:24,107 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_variable.py::test_future_erred_sync 2022-08-26 14:12:25,291 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:12:25,293 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:25,296 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:25,297 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45907
-2022-08-26 14:12:25,297 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:12:25,306 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39407
-2022-08-26 14:12:25,307 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39407
-2022-08-26 14:12:25,307 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33897
-2022-08-26 14:12:25,307 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45907
-2022-08-26 14:12:25,307 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:25,307 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:25,307 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:25,307 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wer3gs16
-2022-08-26 14:12:25,307 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:25,358 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45421
-2022-08-26 14:12:25,358 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45421
-2022-08-26 14:12:25,358 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44539
-2022-08-26 14:12:25,358 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45907
-2022-08-26 14:12:25,358 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:25,358 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:25,358 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:25,358 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ivmceah2
-2022-08-26 14:12:25,358 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:25,610 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39407', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:25,890 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39407
-2022-08-26 14:12:25,890 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:25,890 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45907
-2022-08-26 14:12:25,890 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:25,890 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45421', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:25,891 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45421
-2022-08-26 14:12:25,891 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:25,891 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:25,891 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45907
-2022-08-26 14:12:25,891 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:25,892 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:25,897 - distributed.scheduler - INFO - Receive client connection: Client-c8d8c2a5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:25,897 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:25,991 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-PASSED2022-08-26 14:12:26,012 - distributed.scheduler - INFO - Remove client Client-c8d8c2a5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:26,012 - distributed.scheduler - INFO - Remove client Client-c8d8c2a5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:26,012 - distributed.scheduler - INFO - Close client connection: Client-c8d8c2a5-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_variable.py::test_variables_do_not_leak_client 2022-08-26 14:12:26,026 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:26,028 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:26,028 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44815
-2022-08-26 14:12:26,028 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36007
-2022-08-26 14:12:26,029 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-wer3gs16', purging
-2022-08-26 14:12:26,029 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-ivmceah2', purging
-2022-08-26 14:12:26,033 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33515
-2022-08-26 14:12:26,033 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33515
-2022-08-26 14:12:26,033 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:26,033 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34503
-2022-08-26 14:12:26,033 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44815
-2022-08-26 14:12:26,033 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,033 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:26,033 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:26,033 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-r9iyo4z7
-2022-08-26 14:12:26,034 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,034 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38035
-2022-08-26 14:12:26,034 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38035
-2022-08-26 14:12:26,034 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:26,034 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33225
-2022-08-26 14:12:26,034 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44815
-2022-08-26 14:12:26,034 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,034 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:26,034 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:26,034 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1n7gg6se
-2022-08-26 14:12:26,034 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,037 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33515', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:26,038 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33515
-2022-08-26 14:12:26,038 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,038 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38035', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:26,038 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38035
-2022-08-26 14:12:26,038 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,039 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44815
-2022-08-26 14:12:26,039 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,039 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44815
-2022-08-26 14:12:26,039 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,039 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,039 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,053 - distributed.scheduler - INFO - Receive client connection: Client-c8f08f24-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:26,053 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,065 - distributed.scheduler - INFO - Remove client variable-x
-2022-08-26 14:12:26,075 - distributed.scheduler - INFO - Remove client Client-c8f08f24-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:26,075 - distributed.scheduler - INFO - Remove client Client-c8f08f24-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:26,075 - distributed.scheduler - INFO - Close client connection: Client-c8f08f24-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:26,076 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33515
-2022-08-26 14:12:26,076 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38035
-2022-08-26 14:12:26,077 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38035', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:26,077 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38035
-2022-08-26 14:12:26,078 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-494ac5c7-1a9f-47ca-9330-a426804bc997 Address tcp://127.0.0.1:38035 Status: Status.closing
-2022-08-26 14:12:26,078 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5890c723-1305-42bf-8dcf-c5f001389c88 Address tcp://127.0.0.1:33515 Status: Status.closing
-2022-08-26 14:12:26,078 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33515', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:26,078 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33515
-2022-08-26 14:12:26,078 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:26,079 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:26,079 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_versions.py::test_versions_match PASSED
-distributed/tests/test_versions.py::test_version_mismatch[source-MISMATCHED] PASSED
-distributed/tests/test_versions.py::test_version_mismatch[source-MISSING] PASSED
-distributed/tests/test_versions.py::test_version_mismatch[source-KEY_ERROR] PASSED
-distributed/tests/test_versions.py::test_version_mismatch[source-NONE] PASSED
-distributed/tests/test_versions.py::test_version_mismatch[scheduler-MISMATCHED] PASSED
-distributed/tests/test_versions.py::test_version_mismatch[scheduler-MISSING] PASSED
-distributed/tests/test_versions.py::test_version_mismatch[scheduler-KEY_ERROR] PASSED
-distributed/tests/test_versions.py::test_version_mismatch[scheduler-NONE] PASSED
-distributed/tests/test_versions.py::test_version_mismatch[worker-1-MISMATCHED] PASSED
-distributed/tests/test_versions.py::test_version_mismatch[worker-1-MISSING] PASSED
-distributed/tests/test_versions.py::test_version_mismatch[worker-1-KEY_ERROR] PASSED
-distributed/tests/test_versions.py::test_version_mismatch[worker-1-NONE] PASSED
-distributed/tests/test_versions.py::test_scheduler_mismatched_irrelevant_package PASSED
-distributed/tests/test_versions.py::test_scheduler_additional_irrelevant_package PASSED
-distributed/tests/test_versions.py::test_python_mismatch PASSED
-distributed/tests/test_versions.py::test_version_warning_in_cluster 2022-08-26 14:12:26,337 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:26,338 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:26,339 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33099
-2022-08-26 14:12:26,339 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45485
-2022-08-26 14:12:26,343 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38109
-2022-08-26 14:12:26,343 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38109
-2022-08-26 14:12:26,343 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:26,343 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34831
-2022-08-26 14:12:26,344 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33099
-2022-08-26 14:12:26,344 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,344 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:26,344 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:26,344 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-eah5pycv
-2022-08-26 14:12:26,344 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,344 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40775
-2022-08-26 14:12:26,344 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40775
-2022-08-26 14:12:26,344 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:26,345 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46331
-2022-08-26 14:12:26,345 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33099
-2022-08-26 14:12:26,345 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,345 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:26,345 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:26,345 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-himb9ta2
-2022-08-26 14:12:26,345 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,348 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38109', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:26,348 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38109
-2022-08-26 14:12:26,348 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,349 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40775', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:26,349 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40775
-2022-08-26 14:12:26,349 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,349 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33099
-2022-08-26 14:12:26,349 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,349 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33099
-2022-08-26 14:12:26,349 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,350 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,350 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,363 - distributed.scheduler - INFO - Receive client connection: Client-c91ff6ca-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:26,364 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,375 - distributed.scheduler - INFO - Remove client Client-c91ff6ca-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:26,375 - distributed.scheduler - INFO - Remove client Client-c91ff6ca-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:26,375 - distributed.scheduler - INFO - Close client connection: Client-c91ff6ca-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:26,378 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39551
-2022-08-26 14:12:26,378 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39551
-2022-08-26 14:12:26,378 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44487
-2022-08-26 14:12:26,378 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33099
-2022-08-26 14:12:26,378 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,378 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:26,378 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:26,379 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-aw_xlghn
-2022-08-26 14:12:26,379 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,380 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39551', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:26,381 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39551
-2022-08-26 14:12:26,381 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,381 - distributed.worker - WARNING - Mismatched versions found
-
-+---------+---------------------------------------------+-----------+-----------------------+
-| Package | Worker-0121d61e-706c-4b4e-9739-c0193d88fc4a | Scheduler | Workers               |
-+---------+---------------------------------------------+-----------+-----------------------+
-| dask    | 2022.8.1                                    | 2022.8.1  | {'0.0.0', '2022.8.1'} |
-+---------+---------------------------------------------+-----------+-----------------------+
-2022-08-26 14:12:26,381 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33099
-2022-08-26 14:12:26,381 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,381 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39551
-2022-08-26 14:12:26,382 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,382 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0121d61e-706c-4b4e-9739-c0193d88fc4a Address tcp://127.0.0.1:39551 Status: Status.closing
-2022-08-26 14:12:26,382 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39551', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:26,382 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39551
-2022-08-26 14:12:26,383 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38109
-2022-08-26 14:12:26,383 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40775
-2022-08-26 14:12:26,384 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38109', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:26,384 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38109
-2022-08-26 14:12:26,384 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40775', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:26,384 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40775
-2022-08-26 14:12:26,385 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:26,385 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-add19be3-4357-4089-8dc7-8dde3ce15dd0 Address tcp://127.0.0.1:38109 Status: Status.closing
-2022-08-26 14:12:26,385 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dd20aa91-84c7-4f5f-b401-fe7ce784ada8 Address tcp://127.0.0.1:40775 Status: Status.closing
-2022-08-26 14:12:26,386 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:26,386 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_versions.py::test_python_version PASSED
-distributed/tests/test_versions.py::test_version_custom_pkgs PASSED
-distributed/tests/test_worker.py::test_worker_nthreads 2022-08-26 14:12:26,628 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:26,630 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:26,630 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45085
-2022-08-26 14:12:26,630 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45359
-2022-08-26 14:12:26,633 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40767
-2022-08-26 14:12:26,633 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40767
-2022-08-26 14:12:26,633 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37037
-2022-08-26 14:12:26,633 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45085
-2022-08-26 14:12:26,633 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,633 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:26,634 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:26,634 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5scou3jc
-2022-08-26 14:12:26,634 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,635 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40767', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:26,636 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40767
-2022-08-26 14:12:26,636 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,636 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45085
-2022-08-26 14:12:26,636 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,636 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40767
-2022-08-26 14:12:26,637 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,637 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-54da90b8-d47f-4475-8400-0e9b5eba2b88 Address tcp://127.0.0.1:40767 Status: Status.closing
-2022-08-26 14:12:26,638 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40767', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:26,638 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40767
-2022-08-26 14:12:26,638 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:26,638 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:26,638 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_str 2022-08-26 14:12:26,866 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:26,867 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:26,868 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44717
-2022-08-26 14:12:26,868 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42317
-2022-08-26 14:12:26,872 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38135
-2022-08-26 14:12:26,872 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38135
-2022-08-26 14:12:26,872 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:26,872 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43331
-2022-08-26 14:12:26,872 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44717
-2022-08-26 14:12:26,872 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,872 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:26,872 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:26,872 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1pzgkiut
-2022-08-26 14:12:26,872 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,873 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45133
-2022-08-26 14:12:26,873 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45133
-2022-08-26 14:12:26,873 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:26,873 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39947
-2022-08-26 14:12:26,873 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44717
-2022-08-26 14:12:26,873 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,873 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:26,873 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:26,873 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ldpb8vbd
-2022-08-26 14:12:26,873 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,876 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38135', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:26,876 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38135
-2022-08-26 14:12:26,877 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,877 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45133', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:26,877 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45133
-2022-08-26 14:12:26,877 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,877 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44717
-2022-08-26 14:12:26,877 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,878 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44717
-2022-08-26 14:12:26,878 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:26,878 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,878 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:26,889 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38135
-2022-08-26 14:12:26,889 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45133
-2022-08-26 14:12:26,890 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38135', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:26,890 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38135
-2022-08-26 14:12:26,890 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45133', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:26,891 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45133
-2022-08-26 14:12:26,891 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:26,891 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-96300bf2-44ec-40cc-9cb2-9d027dff32fb Address tcp://127.0.0.1:38135 Status: Status.closing
-2022-08-26 14:12:26,891 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fe35ecb9-b6a6-4a1c-8e9b-e7c247fbc27b Address tcp://127.0.0.1:45133 Status: Status.closing
-2022-08-26 14:12:26,892 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:26,892 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_identity 2022-08-26 14:12:27,119 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:27,121 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:27,121 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35811
-2022-08-26 14:12:27,121 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33639
-2022-08-26 14:12:27,124 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39631
-2022-08-26 14:12:27,124 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39631
-2022-08-26 14:12:27,124 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34489
-2022-08-26 14:12:27,124 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35811
-2022-08-26 14:12:27,124 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,124 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:27,124 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:27,124 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t94fn5jr
-2022-08-26 14:12:27,124 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,126 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39631', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:27,126 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39631
-2022-08-26 14:12:27,126 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,127 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35811
-2022-08-26 14:12:27,127 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,127 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39631
-2022-08-26 14:12:27,127 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,127 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7d30b1bb-0830-498d-bcf4-c18ffa4a5925 Address tcp://127.0.0.1:39631 Status: Status.closing
-2022-08-26 14:12:27,128 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39631', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:27,128 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39631
-2022-08-26 14:12:27,128 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:27,129 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:27,129 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_bad_args 2022-08-26 14:12:27,356 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:27,357 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:27,358 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33899
-2022-08-26 14:12:27,358 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41273
-2022-08-26 14:12:27,362 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40835
-2022-08-26 14:12:27,362 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40835
-2022-08-26 14:12:27,362 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:27,362 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33385
-2022-08-26 14:12:27,362 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33899
-2022-08-26 14:12:27,362 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,362 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:27,362 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:27,362 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-igbo_1uc
-2022-08-26 14:12:27,363 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,363 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42955
-2022-08-26 14:12:27,363 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42955
-2022-08-26 14:12:27,363 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:27,363 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45077
-2022-08-26 14:12:27,363 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33899
-2022-08-26 14:12:27,363 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,363 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:27,363 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:27,363 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h8vbk9ox
-2022-08-26 14:12:27,363 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,366 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40835', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:27,366 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40835
-2022-08-26 14:12:27,367 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,367 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42955', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:27,367 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42955
-2022-08-26 14:12:27,367 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,367 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33899
-2022-08-26 14:12:27,368 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,368 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33899
-2022-08-26 14:12:27,368 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,368 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,368 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,382 - distributed.scheduler - INFO - Receive client connection: Client-c9bb5363-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,382 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,405 - distributed.worker - DEBUG - Request 1 keys from tcp://127.0.0.1:40835
-2022-08-26 14:12:27,408 - distributed.worker - WARNING - Compute Failed
-Key:       bad_func-b282998df5c7bd130c0c74176c4f97f8
-Function:  bad_func
-args:      (< could not convert arg to str >)
-kwargs:    {'k': < could not convert arg to str >}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:12:27,425 - distributed.scheduler - INFO - Remove client Client-c9bb5363-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,425 - distributed.scheduler - INFO - Remove client Client-c9bb5363-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,426 - distributed.scheduler - INFO - Close client connection: Client-c9bb5363-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,427 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40835
-2022-08-26 14:12:27,428 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42955
-2022-08-26 14:12:27,428 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-80b8c297-eeca-45a3-88a6-a2bbb882adb4 Address tcp://127.0.0.1:40835 Status: Status.closing
-2022-08-26 14:12:27,429 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3ea11020-da7a-4ad0-a9ad-f318c00ec23a Address tcp://127.0.0.1:42955 Status: Status.closing
-2022-08-26 14:12:27,429 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40835', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:27,429 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40835
-2022-08-26 14:12:27,430 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42955', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:27,430 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42955
-2022-08-26 14:12:27,430 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:27,431 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:27,431 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_upload_file 2022-08-26 14:12:27,659 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:27,661 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:27,661 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35995
-2022-08-26 14:12:27,661 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33783
-2022-08-26 14:12:27,666 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33583
-2022-08-26 14:12:27,666 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33583
-2022-08-26 14:12:27,666 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:27,666 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40453
-2022-08-26 14:12:27,666 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35995
-2022-08-26 14:12:27,666 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,666 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:27,666 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:27,666 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-brl_upxz
-2022-08-26 14:12:27,666 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,667 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37411
-2022-08-26 14:12:27,667 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37411
-2022-08-26 14:12:27,667 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:27,667 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43961
-2022-08-26 14:12:27,667 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35995
-2022-08-26 14:12:27,667 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,667 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:27,667 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:27,667 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dxy4rvji
-2022-08-26 14:12:27,667 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,670 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33583', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:27,670 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33583
-2022-08-26 14:12:27,670 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,671 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37411', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:27,671 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37411
-2022-08-26 14:12:27,671 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,671 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35995
-2022-08-26 14:12:27,671 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,672 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35995
-2022-08-26 14:12:27,672 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,672 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,672 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,685 - distributed.scheduler - INFO - Receive client connection: Client-c9e9adb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,686 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,691 - distributed.utils - INFO - Reload module foobar from .py file
-2022-08-26 14:12:27,693 - distributed.utils - INFO - Reload module foobar from .py file
-2022-08-26 14:12:27,708 - distributed.scheduler - INFO - Remove client Client-c9e9adb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,708 - distributed.scheduler - INFO - Remove client Client-c9e9adb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,708 - distributed.scheduler - INFO - Close client connection: Client-c9e9adb9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,709 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:27,709 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:12:27,710 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37411', name: 1, status: running, memory: 0, processing: 0>
-2022-08-26 14:12:27,710 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37411
-2022-08-26 14:12:27,710 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37411
-2022-08-26 14:12:27,711 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33583', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:12:27,711 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33583
-2022-08-26 14:12:27,711 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:27,711 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-158281dc-d19e-4f21-99e9-e3e7590987b4 Address tcp://127.0.0.1:37411 Status: Status.closing
-2022-08-26 14:12:27,711 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33583
-2022-08-26 14:12:27,712 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:50746 remote=tcp://127.0.0.1:35995>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:12:27,712 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-889b30d7-8b0f-4d6e-a94e-580be7740e06 Address tcp://127.0.0.1:33583 Status: Status.closing
-2022-08-26 14:12:27,712 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:50736 remote=tcp://127.0.0.1:35995>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-PASSED
-distributed/tests/test_worker.py::test_upload_file_pyc SKIPPED (don'...)
-distributed/tests/test_worker.py::test_upload_egg 2022-08-26 14:12:27,943 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:27,944 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:27,945 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39093
-2022-08-26 14:12:27,945 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33389
-2022-08-26 14:12:27,949 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34227
-2022-08-26 14:12:27,949 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34227
-2022-08-26 14:12:27,949 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:27,949 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42177
-2022-08-26 14:12:27,949 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39093
-2022-08-26 14:12:27,949 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,949 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:27,949 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:27,950 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_e3o6gao
-2022-08-26 14:12:27,950 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,950 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43331
-2022-08-26 14:12:27,950 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43331
-2022-08-26 14:12:27,950 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:27,950 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36985
-2022-08-26 14:12:27,950 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39093
-2022-08-26 14:12:27,950 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,950 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:27,950 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:27,950 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wsvwgalz
-2022-08-26 14:12:27,951 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,954 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34227', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:27,954 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34227
-2022-08-26 14:12:27,954 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,954 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43331', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:27,954 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43331
-2022-08-26 14:12:27,955 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,955 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39093
-2022-08-26 14:12:27,955 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,955 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39093
-2022-08-26 14:12:27,955 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:27,955 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,955 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,969 - distributed.scheduler - INFO - Receive client connection: Client-ca14f341-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,969 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:27,972 - distributed.worker - INFO - Starting Worker plugin /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/testegg-1.0.0-py3.4.egg0ebf817e-5651-405e-961d-bf3232e5b1d3
-2022-08-26 14:12:27,975 - distributed.utils - INFO - Reload module playproject from .egg file
-2022-08-26 14:12:27,975 - distributed.utils - INFO - Reload module testegg from .egg file
-2022-08-26 14:12:27,976 - distributed.worker - INFO - Starting Worker plugin /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/testegg-1.0.0-py3.4.egg0ebf817e-5651-405e-961d-bf3232e5b1d3
-2022-08-26 14:12:27,978 - distributed.utils - INFO - Reload module playproject from .egg file
-2022-08-26 14:12:27,978 - distributed.utils - INFO - Reload module testegg from .egg file
-2022-08-26 14:12:27,991 - distributed.scheduler - INFO - Remove client Client-ca14f341-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,991 - distributed.scheduler - INFO - Remove client Client-ca14f341-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,992 - distributed.scheduler - INFO - Close client connection: Client-ca14f341-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:27,992 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:27,993 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:12:27,993 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43331', name: 1, status: running, memory: 0, processing: 0>
-2022-08-26 14:12:27,993 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43331
-2022-08-26 14:12:27,993 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43331
-2022-08-26 14:12:27,994 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34227', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:12:27,994 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34227
-2022-08-26 14:12:27,994 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:27,994 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-08745e94-2862-4eac-9c17-9ef58b9e67c5 Address tcp://127.0.0.1:43331 Status: Status.closing
-2022-08-26 14:12:27,994 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34227
-2022-08-26 14:12:27,995 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:42552 remote=tcp://127.0.0.1:39093>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:12:27,995 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-56232ece-6c04-414d-8bcc-4d3022e86e81 Address tcp://127.0.0.1:34227 Status: Status.closing
-2022-08-26 14:12:27,995 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:42550 remote=tcp://127.0.0.1:39093>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-PASSED
-distributed/tests/test_worker.py::test_upload_pyz 2022-08-26 14:12:28,224 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:28,226 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:28,226 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39119
-2022-08-26 14:12:28,226 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46869
-2022-08-26 14:12:28,230 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39757
-2022-08-26 14:12:28,231 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39757
-2022-08-26 14:12:28,231 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:28,231 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38117
-2022-08-26 14:12:28,231 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39119
-2022-08-26 14:12:28,231 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,231 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:28,231 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:28,231 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vy7zhk0w
-2022-08-26 14:12:28,231 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,231 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38025
-2022-08-26 14:12:28,231 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38025
-2022-08-26 14:12:28,232 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:28,232 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45603
-2022-08-26 14:12:28,232 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39119
-2022-08-26 14:12:28,232 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,232 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:28,232 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:28,232 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yevqhtvy
-2022-08-26 14:12:28,232 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,235 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39757', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:28,235 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39757
-2022-08-26 14:12:28,235 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,235 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38025', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:28,236 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38025
-2022-08-26 14:12:28,236 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,236 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39119
-2022-08-26 14:12:28,236 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,236 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39119
-2022-08-26 14:12:28,236 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,237 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,237 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,250 - distributed.scheduler - INFO - Receive client connection: Client-ca3fdbdf-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:28,251 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,253 - distributed.worker - INFO - Starting Worker plugin /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/mytest.pyze06904e1-43c1-43cc-96fb-1b57f5796763
-2022-08-26 14:12:28,256 - distributed.utils - INFO - Reload module mytest from .pyz file
-2022-08-26 14:12:28,256 - distributed.worker - INFO - Starting Worker plugin /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/mytest.pyze06904e1-43c1-43cc-96fb-1b57f5796763
-2022-08-26 14:12:28,258 - distributed.utils - INFO - Reload module mytest from .pyz file
-2022-08-26 14:12:28,273 - distributed.scheduler - INFO - Remove client Client-ca3fdbdf-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:28,273 - distributed.scheduler - INFO - Remove client Client-ca3fdbdf-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:28,273 - distributed.scheduler - INFO - Close client connection: Client-ca3fdbdf-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:28,274 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:28,274 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:12:28,275 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38025', name: 1, status: running, memory: 0, processing: 0>
-2022-08-26 14:12:28,275 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38025
-2022-08-26 14:12:28,275 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38025
-2022-08-26 14:12:28,276 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39757', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:12:28,276 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39757
-2022-08-26 14:12:28,276 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:28,276 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e39e9970-daeb-427d-b941-e35e8ef171ac Address tcp://127.0.0.1:38025 Status: Status.closing
-2022-08-26 14:12:28,276 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39757
-2022-08-26 14:12:28,276 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:32814 remote=tcp://127.0.0.1:39119>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:12:28,277 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1efaeb08-c44e-4b03-af22-d23d11a5fab2 Address tcp://127.0.0.1:39757 Status: Status.closing
-2022-08-26 14:12:28,277 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:32800 remote=tcp://127.0.0.1:39119>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-PASSED
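
The run above shows the client shipping a mytest.pyz bundle to both workers, which then reload the module from it. The user-facing entry point for this kind of file shipping is Client.upload_file; a minimal sketch of the same pattern on an in-process cluster (the mymodule.py helper and its transform function are made up for illustration) could look like:

    from pathlib import Path
    from distributed import Client

    # A throwaway module to ship to the workers (contents are illustrative).
    Path("mymodule.py").write_text("def transform(x):\n    return x + 1\n")

    # In-process cluster for the sketch; a real deployment would connect to
    # a scheduler address such as Client("tcp://127.0.0.1:8786").
    client = Client(processes=False)

    # upload_file copies the file into each worker's local directory and
    # (re)imports it, which is what the "Reload module" lines above record.
    client.upload_file("mymodule.py")

    def uses_uploaded_module(x):
        import mymodule
        return mymodule.transform(x)

    print(client.submit(uses_uploaded_module, 1).result())  # 2
    client.close()
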
-distributed/tests/test_worker.py::test_upload_large_file 2022-08-26 14:12:28,506 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:28,508 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:28,508 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34815
-2022-08-26 14:12:28,508 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33973
-2022-08-26 14:12:28,513 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34079
-2022-08-26 14:12:28,513 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34079
-2022-08-26 14:12:28,513 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:28,513 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44727
-2022-08-26 14:12:28,513 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34815
-2022-08-26 14:12:28,513 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,513 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:28,513 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:28,513 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kpvb6nmh
-2022-08-26 14:12:28,513 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,514 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34157
-2022-08-26 14:12:28,514 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34157
-2022-08-26 14:12:28,514 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:28,514 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41043
-2022-08-26 14:12:28,514 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34815
-2022-08-26 14:12:28,514 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,514 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:28,514 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:28,514 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-udvbqf_q
-2022-08-26 14:12:28,514 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,517 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34079', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:28,517 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34079
-2022-08-26 14:12:28,517 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,518 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34157', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:28,518 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34157
-2022-08-26 14:12:28,518 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,518 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34815
-2022-08-26 14:12:28,518 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,518 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34815
-2022-08-26 14:12:28,518 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,519 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,519 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,532 - distributed.scheduler - INFO - Receive client connection: Client-ca6ae8dc-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:28,533 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,566 - distributed.scheduler - INFO - Remove client Client-ca6ae8dc-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:28,566 - distributed.scheduler - INFO - Remove client Client-ca6ae8dc-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:28,566 - distributed.scheduler - INFO - Close client connection: Client-ca6ae8dc-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:28,566 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34079
-2022-08-26 14:12:28,567 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34157
-2022-08-26 14:12:28,568 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34079', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:28,568 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34079
-2022-08-26 14:12:28,568 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34157', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:28,568 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34157
-2022-08-26 14:12:28,568 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:28,568 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5b8f0ccf-36d1-4f2d-bf40-8b0000a5c6ea Address tcp://127.0.0.1:34079 Status: Status.closing
-2022-08-26 14:12:28,568 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d68a5c5c-d92f-40bd-9a50-656f805b721c Address tcp://127.0.0.1:34157 Status: Status.closing
-2022-08-26 14:12:28,569 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:28,569 - distributed.scheduler - INFO - Scheduler closing all comms
-SKIPPED (co...)
-distributed/tests/test_worker.py::test_broadcast 2022-08-26 14:12:28,575 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:28,577 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:28,577 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36703
-2022-08-26 14:12:28,577 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40689
-2022-08-26 14:12:28,582 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39767
-2022-08-26 14:12:28,582 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39767
-2022-08-26 14:12:28,582 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:28,582 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43523
-2022-08-26 14:12:28,582 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36703
-2022-08-26 14:12:28,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,582 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:28,582 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:28,582 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-w6esskvh
-2022-08-26 14:12:28,582 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,583 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38317
-2022-08-26 14:12:28,583 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38317
-2022-08-26 14:12:28,583 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:28,583 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46327
-2022-08-26 14:12:28,583 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36703
-2022-08-26 14:12:28,583 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,583 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:28,583 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:28,583 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-l53r44sg
-2022-08-26 14:12:28,583 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,586 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39767', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:28,586 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39767
-2022-08-26 14:12:28,586 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,587 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38317', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:28,587 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38317
-2022-08-26 14:12:28,587 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,587 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36703
-2022-08-26 14:12:28,587 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,587 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36703
-2022-08-26 14:12:28,587 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,588 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,588 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,603 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39767
-2022-08-26 14:12:28,603 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38317
-2022-08-26 14:12:28,604 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39767', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:28,604 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39767
-2022-08-26 14:12:28,604 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38317', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:28,604 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38317
-2022-08-26 14:12:28,604 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:28,604 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3da5265b-64b5-4813-a980-a5c6dd41b2ea Address tcp://127.0.0.1:39767 Status: Status.closing
-2022-08-26 14:12:28,605 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d3885edf-6446-4cc2-a958-4b4c93f5216b Address tcp://127.0.0.1:38317 Status: Status.closing
-2022-08-26 14:12:28,605 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:28,606 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
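
test_broadcast above spins up two workers and, per its name, exercises sending a message to all of them at once. The user-facing way to run something on every worker is Client.run, which returns the results keyed by worker address; a small sketch on an in-process cluster:

    import os
    from distributed import Client

    client = Client(processes=False)

    # Run a function on every worker; the result is a dict keyed by
    # worker address.
    print(client.run(os.getpid))

    # Functions with a `dask_worker` argument receive the Worker object.
    print(client.run(lambda dask_worker: dask_worker.address))

    client.close()
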
-distributed/tests/test_worker.py::test_worker_with_port_zero 2022-08-26 14:12:28,835 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:28,837 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:28,837 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41273
-2022-08-26 14:12:28,837 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33827
-2022-08-26 14:12:28,840 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41973
-2022-08-26 14:12:28,840 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41973
-2022-08-26 14:12:28,840 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39105
-2022-08-26 14:12:28,840 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41273
-2022-08-26 14:12:28,840 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,840 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:28,840 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:28,840 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-61d48bbo
-2022-08-26 14:12:28,840 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,842 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41973', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:28,842 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41973
-2022-08-26 14:12:28,842 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,843 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41273
-2022-08-26 14:12:28,843 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:28,843 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41973
-2022-08-26 14:12:28,843 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:28,844 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3b568996-03a2-478f-b56c-22518ccc0a0d Address tcp://127.0.0.1:41973 Status: Status.closing
-2022-08-26 14:12:28,844 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41973', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:28,844 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41973
-2022-08-26 14:12:28,844 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:28,845 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:28,845 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
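
In test_worker_with_port_zero above, the worker is started with port 0 and ends up listening on an OS-assigned port (41973 in this run). A sketch of the same thing with the async API, assuming Worker accepts the port keyword as in the test:

    import asyncio
    from distributed import Scheduler, Worker

    async def main():
        # dashboard_address=":0" just avoids clashing with a running dashboard.
        async with Scheduler(dashboard_address=":0") as s:
            # port=0 asks the operating system for any free port.
            async with Worker(s.address, port=0) as w:
                print("worker bound to", w.address)

    asyncio.run(main())
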
-distributed/tests/test_worker.py::test_worker_port_range 2022-08-26 14:12:29,073 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:29,074 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:29,075 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46851
-2022-08-26 14:12:29,075 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46423
-2022-08-26 14:12:29,077 - distributed.worker - INFO -       Start worker at:       tcp://127.0.0.1:9867
-2022-08-26 14:12:29,078 - distributed.worker - INFO -          Listening to:       tcp://127.0.0.1:9867
-2022-08-26 14:12:29,078 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44981
-2022-08-26 14:12:29,078 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46851
-2022-08-26 14:12:29,078 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,078 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:29,078 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:29,078 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4xlk1u0f
-2022-08-26 14:12:29,078 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,080 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:9867', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:29,080 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:9867
-2022-08-26 14:12:29,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,080 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46851
-2022-08-26 14:12:29,080 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,083 - distributed.worker - INFO -       Start worker at:       tcp://127.0.0.1:9868
-2022-08-26 14:12:29,083 - distributed.worker - INFO -          Listening to:       tcp://127.0.0.1:9868
-2022-08-26 14:12:29,083 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37335
-2022-08-26 14:12:29,083 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46851
-2022-08-26 14:12:29,083 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,083 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:29,083 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:29,083 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ceu9kevi
-2022-08-26 14:12:29,083 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,084 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,085 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:9868', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:29,086 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:9868
-2022-08-26 14:12:29,086 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,086 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46851
-2022-08-26 14:12:29,086 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,088 - distributed.worker - INFO - Stopping worker
-2022-08-26 14:12:29,088 - distributed.worker - INFO - Closed worker has not yet started: Status.init
-2022-08-26 14:12:29,089 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,090 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:9868
-2022-08-26 14:12:29,090 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d971be44-58ed-4f7e-bade-cc1d7f76c928 Address tcp://127.0.0.1:9868 Status: Status.closing
-2022-08-26 14:12:29,091 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:9868', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:29,091 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:9868
-2022-08-26 14:12:29,091 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:9867
-2022-08-26 14:12:29,092 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:9867', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:29,092 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:9867
-2022-08-26 14:12:29,092 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:29,092 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d07f8be1-bdaf-4f9b-b757-8f0be64c634a Address tcp://127.0.0.1:9867 Status: Status.closing
-2022-08-26 14:12:29,093 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:29,093 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
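
test_worker_port_range above starts two workers inside a restricted port range and gets 9867 and 9868; a third worker cannot bind and is shut down before it starts ("Closed worker has not yet started"). A sketch of the range form, assuming the port keyword accepts a "low:high" string as in the test:

    import asyncio
    from distributed import Scheduler, Worker

    async def main():
        async with Scheduler(dashboard_address=":0") as s:
            # Each worker picks the next free port inside the range.
            async with Worker(s.address, port="9867:9868") as w1:
                async with Worker(s.address, port="9867:9868") as w2:
                    print(w1.port, w2.port)  # 9867 9868

    asyncio.run(main())
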
-distributed/tests/test_worker.py::test_worker_waits_for_scheduler SKIPPED
-distributed/tests/test_worker.py::test_worker_task_data 2022-08-26 14:12:29,321 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:29,322 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:29,323 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33669
-2022-08-26 14:12:29,323 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45133
-2022-08-26 14:12:29,325 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41191
-2022-08-26 14:12:29,326 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41191
-2022-08-26 14:12:29,326 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:29,326 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37789
-2022-08-26 14:12:29,326 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33669
-2022-08-26 14:12:29,326 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,326 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:29,326 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:29,326 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c4kh9x7t
-2022-08-26 14:12:29,326 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,328 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41191', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:29,328 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41191
-2022-08-26 14:12:29,328 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,328 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33669
-2022-08-26 14:12:29,328 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,329 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,342 - distributed.scheduler - INFO - Receive client connection: Client-cae66e27-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:29,342 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,364 - distributed.scheduler - INFO - Remove client Client-cae66e27-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:29,364 - distributed.scheduler - INFO - Remove client Client-cae66e27-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:29,364 - distributed.scheduler - INFO - Close client connection: Client-cae66e27-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:29,365 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41191
-2022-08-26 14:12:29,365 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-90054fd2-8868-4196-85c7-fb1ffc67a5e2 Address tcp://127.0.0.1:41191 Status: Status.closing
-2022-08-26 14:12:29,366 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41191', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:29,366 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41191
-2022-08-26 14:12:29,366 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:29,366 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:29,367 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_error_message 2022-08-26 14:12:29,590 - distributed.protocol.pickle - INFO - Failed to deserialize b'\x80\x05\x95\x85\x04\x00\x00\x00\x00\x00\x00\x8c\x16tblib.pickling_support\x94\x8c\x12unpickle_exception\x94\x93\x94(\x8c\x17cloudpickle.cloudpickle\x94\x8c\x14_make_skeleton_class\x94\x93\x94(\x8c\x08builtins\x94\x8c\x04type\x94\x93\x94\x8c\x0bMyException\x94h\x06\x8c\tException\x94\x93\x94\x85\x94}\x94\x8c 0ef04053d685423f9dc91f538bbd76d1\x94Nt\x94R\x94\x8c\x1ccloudpickle.cloudpickle_fast\x94\x8c\x0f_class_setstate\x94\x93\x94h\x10}\x94(\x8c\n__module__\x94\x8c\x0btest_worker\x94\x8c\x08__init__\x94h\x03\x8c\x0e_make_function\x94\x93\x94(h\x03\x8c\r_builtin_type\x94\x93\x94\x8c\x08CodeType\x94\x85\x94R\x94(K\x03K\x00K\x00K\x03K\x02JS\x00\x00\x01C\x10|\x01|\x02\x17\x00f\x01|\x00_\x00d\x00S\x00\x94N\x85\x94\x8c\x04args\x94\x85\x94\x8c\x04self\x94\x8c\x01a\x94\x8c\x01b\x94\x87\x94\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/dist
ributed-2022.8.1/distributed/tests/test_worker.py\x94h\x17Mq\x01C\x02\x10\x01\x94))t\x94R\x94}\x94(\x8c\x0b__package__\x94\x8c\x00\x94\x8c\x08__name__\x94h\x16\x8c\x08__file__\x94\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py\x94uNNNt\x94R\x94h\x11\x8c\x12_function_setstate\x94\x93\x94h2}\x94}\x94(h.h\x17\x8c\x0c__qualname__\x94\x8c0test_error_message.<locals>.MyException.__init__\x94\x8c\x0f__annotations__\x94}\x94\x8c\x0e__kwdefaults__\x94N\x8c\x0c__defaults__\x94Nh\x15h\x16\x8c\x07__doc__\x94N\x8c\x0b__closure__\x94N\x8c\x17_cloudpickle_submodules\x94]\x94\x8c\x0b__globals__\x94}\x94u\x86\x94\x86R0\x8c\x07__str__\x94h\x19(h\x1e(K\x01K\x00K\x00K\x01K\x02JS\x00\x00\x01C\nd\x01|\x00j\x00\x16\x00S\x00\x94N\x8c\x0fMyException(%s)\x94\x86\x94h"h#\x85\x94h\'hDMt\x01C\x02\n\x01\x94))t\x94R\x94h+NNNt\x94R\x94h4hM}\x94}\x94(h.hDh7\x8c/test_error_message.<locals>.MyException.__str__\x94h9}\x94h;Nh<Nh\x15h\x16h=Nh>Nh?]\x94hA}\x94u\
 x86\x94\x86R0h=Nu}\x94\x86\x94\x86R0\x8c\x0bHelloWorld!\x94\x85\x94NNt\x94R\x94.'
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/pickle.py", line 73, in loads
-    return pickle.loads(x)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tblib/pickling_support.py", line 26, in unpickle_exception
-    inst = func(*args)
-TypeError: test_error_message.<locals>.MyException.__init__() missing 1 required positional argument: 'b'
-PASSED
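
The TypeError above is a general Python pitfall rather than anything distributed-specific: Exception.__reduce__ replays only self.args into the constructor, so an exception whose __init__ requires extra positional arguments cannot be rebuilt after pickling, which is exactly what the scheduler/worker error path runs into here. A standalone illustration:

    import pickle

    class MyException(Exception):
        def __init__(self, a, b):
            # Stores a single combined message, so self.args no longer
            # matches the (a, b) signature that unpickling will replay.
            super().__init__(a + b)

    err = MyException("Hello", "World!")
    data = pickle.dumps(err)
    try:
        pickle.loads(data)
    except TypeError as e:
        print("reconstruction failed:", e)
        # -> missing 1 required positional argument: 'b'
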
-distributed/tests/test_worker.py::test_chained_error_message 2022-08-26 14:12:29,600 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:29,601 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:29,602 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35121
-2022-08-26 14:12:29,602 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33189
-2022-08-26 14:12:29,606 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39877
-2022-08-26 14:12:29,606 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39877
-2022-08-26 14:12:29,606 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:29,606 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43399
-2022-08-26 14:12:29,606 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35121
-2022-08-26 14:12:29,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,606 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:29,607 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:29,607 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ftu0ove_
-2022-08-26 14:12:29,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,607 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42929
-2022-08-26 14:12:29,607 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42929
-2022-08-26 14:12:29,607 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:29,607 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45641
-2022-08-26 14:12:29,607 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35121
-2022-08-26 14:12:29,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,607 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:29,607 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:29,608 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6maoh2ny
-2022-08-26 14:12:29,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,610 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39877', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:29,611 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39877
-2022-08-26 14:12:29,611 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,611 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42929', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:29,611 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42929
-2022-08-26 14:12:29,611 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,612 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35121
-2022-08-26 14:12:29,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,612 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35121
-2022-08-26 14:12:29,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,612 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,612 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,626 - distributed.scheduler - INFO - Receive client connection: Client-cb11c772-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:29,626 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,639 - distributed.worker - WARNING - Compute Failed
-Key:       chained_exception_fn-76ceafd392ed56881e2c3c678439fc52
-Function:  chained_exception_fn
-args:      ()
-kwargs:    {}
-Exception: "MyException('Foo')"
-
-2022-08-26 14:12:29,648 - distributed.scheduler - INFO - Remove client Client-cb11c772-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:29,649 - distributed.scheduler - INFO - Remove client Client-cb11c772-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:29,649 - distributed.scheduler - INFO - Close client connection: Client-cb11c772-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:29,650 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39877
-2022-08-26 14:12:29,650 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42929
-2022-08-26 14:12:29,651 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39877', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:29,651 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39877
-2022-08-26 14:12:29,651 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42929', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:29,651 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42929
-2022-08-26 14:12:29,651 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:29,651 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-919c481f-c6a5-4544-9bcf-dcec840af0e6 Address tcp://127.0.0.1:39877 Status: Status.closing
-2022-08-26 14:12:29,652 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2ce7f25c-aefb-447f-b272-7437207a4fa1 Address tcp://127.0.0.1:42929 Status: Status.closing
-2022-08-26 14:12:29,652 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:29,653 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
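
test_chained_error_message above has a task raise MyException('Foo') from an underlying error; the worker logs "Compute Failed" and the client still sees the full chain, since distributed serializes exceptions with tblib pickling support (visible in the deserialize traceback earlier). A sketch of the client-side view on an in-process cluster (MyException and chained_exception_fn here are stand-ins for the test's helpers):

    from distributed import Client

    class MyException(Exception):
        pass

    def chained_exception_fn():
        try:
            raise ValueError("original cause")
        except ValueError as e:
            # Raise a new error explicitly chained to the original one.
            raise MyException("Foo") from e

    client = Client(processes=False)
    future = client.submit(chained_exception_fn)
    exc = future.exception()            # the MyException('Foo') from the worker
    print(type(exc).__name__, exc)
    print("caused by:", exc.__cause__)  # the chained ValueError survives
    client.close()
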
-distributed/tests/test_worker.py::test_plugin_exception 2022-08-26 14:12:29,900 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:29,902 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:29,902 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:46149
-2022-08-26 14:12:29,902 - distributed.scheduler - INFO -   dashboard at:                    :36469
-2022-08-26 14:12:29,905 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:34327
-2022-08-26 14:12:29,905 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:34327
-2022-08-26 14:12:29,905 - distributed.worker - INFO -          dashboard at:        192.168.1.159:37127
-2022-08-26 14:12:29,905 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:46149
-2022-08-26 14:12:29,905 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,905 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:29,905 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:29,905 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-711v1snb
-2022-08-26 14:12:29,905 - distributed.worker - INFO - Starting Worker plugin MyPlugin-dc720da4-0072-4157-ad70-74c96d86b996
-2022-08-26 14:12:29,907 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:34327
-2022-08-26 14:12:29,907 - distributed.worker - INFO - Closed worker has not yet started: Status.init
-2022-08-26 14:12:29,908 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:29,908 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_plugin_multiple_exceptions 2022-08-26 14:12:29,933 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:29,936 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:29,936 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:44333
-2022-08-26 14:12:29,936 - distributed.scheduler - INFO -   dashboard at:                    :45505
-2022-08-26 14:12:29,939 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:42519
-2022-08-26 14:12:29,939 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:42519
-2022-08-26 14:12:29,939 - distributed.worker - INFO -          dashboard at:        192.168.1.159:42947
-2022-08-26 14:12:29,939 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:44333
-2022-08-26 14:12:29,939 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,939 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:29,939 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:29,939 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-aorfrud7
-2022-08-26 14:12:29,939 - distributed.worker - INFO - Starting Worker plugin MyPlugin2-50f7f769-65cb-40d9-8ea0-183e3f3b5951
-2022-08-26 14:12:29,941 - distributed.worker - INFO - Starting Worker plugin MyPlugin1-777a96fe-f9ab-44c7-aca1-7196afbbebe7
-2022-08-26 14:12:29,942 - distributed.worker - ERROR - Multiple plugin exceptions raised. All exceptions will be logged, the first is raised.
-2022-08-26 14:12:29,942 - distributed.worker - ERROR - RuntimeError('MyPlugin2 Error')
-2022-08-26 14:12:29,942 - distributed.worker - ERROR - ValueError('MyPlugin1 Error')
-2022-08-26 14:12:29,942 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:42519
-2022-08-26 14:12:29,942 - distributed.worker - INFO - Closed worker has not yet started: Status.init
-2022-08-26 14:12:29,943 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:29,943 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
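
The two plugin tests above show what happens when a WorkerPlugin's setup raises: the error is sent back to the registering client, and if several plugins fail while a worker starts, every exception is logged and the first one is re-raised. One way to see the client-side behaviour, using Client.register_worker_plugin as available in this distributed version (the plugin class below is a made-up example):

    from distributed import Client
    from distributed.diagnostics.plugin import WorkerPlugin

    class FailingPlugin(WorkerPlugin):
        name = "failing-plugin"

        def setup(self, worker):
            raise RuntimeError("plugin setup failed")

    client = Client(processes=False)
    try:
        # Registration runs setup() on every worker and propagates the error.
        client.register_worker_plugin(FailingPlugin())
    except RuntimeError as e:
        print("plugin error surfaced on the client:", e)
    client.close()
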
-distributed/tests/test_worker.py::test_plugin_internal_exception 2022-08-26 14:12:29,949 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:29,950 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:29,950 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:37907
-2022-08-26 14:12:29,950 - distributed.scheduler - INFO -   dashboard at:                     :8787
-2022-08-26 14:12:29,953 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:37633
-2022-08-26 14:12:29,953 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:37633
-2022-08-26 14:12:29,953 - distributed.worker - INFO -          dashboard at:        192.168.1.159:42195
-2022-08-26 14:12:29,953 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:37907
-2022-08-26 14:12:29,953 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,953 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:29,954 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:29,954 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1pm__tp6
-2022-08-26 14:12:29,954 - distributed.protocol.pickle - INFO - Failed to deserialize b'corrupting pickle\x80\x04\x95\xaf\x02\x00\x00\x00\x00\x00\x00\x8c\x17cloudpickle.cloudpickle\x94\x8c\x0e_make_function\x94\x93\x94(h\x00\x8c\r_builtin_type\x94\x93\x94\x8c\x08CodeType\x94\x85\x94R\x94(K\x00K\x00K\x00K\x00K\x01JS\x00\x00\x01C\x04d\x00S\x00\x94N\x85\x94))\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py\x94\x8c\x08<lambda>\x94M\xe1\x01C\x02\x04\x00\x94))t\x94R\x94}\x94(\x8c\x0b__package__\x94\x8c\x00\x94\x8c\x08__name__\x94\x8c\x0btest_worker\x94\x8c\x08__file__\x94\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py\x94uNNNt\x94R\x94\x8c\x1ccloudpickle.cloudpickle_fast\x94\x8c\x12_function_setstate\x94\x93\x94h\x17}\x94}\x94(h\x12h\x0b\x8c\x0c__qualname__\x94\x8c0test_plugin_internal_exception.<locals>.<lambda>\x94\x8c\x0f__annotations__\x94}\x94\x8c\x0e__kwdefaults__\x94N\x8c\x0c__defaults__\x94N\x8c\n__module__\x94h\x13\x8c\x07__doc__\x94N\x8c\x0b__closure__\x94N\x8c\x17_cloudpickle_submodules\x94]\x94\x8c\x0b__globals__\x94}\x94u\x86\x94\x86R0.'
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/pickle.py", line 73, in loads
-    return pickle.loads(x)
-UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 16: invalid start byte
-2022-08-26 14:12:29,955 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:37633
-2022-08-26 14:12:29,955 - distributed.worker - INFO - Closed worker has not yet started: Status.init
-2022-08-26 14:12:29,956 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:29,956 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gather 2022-08-26 14:12:29,962 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:29,963 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:29,963 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36479
-2022-08-26 14:12:29,963 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33367
-2022-08-26 14:12:29,968 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38677
-2022-08-26 14:12:29,968 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38677
-2022-08-26 14:12:29,968 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:29,968 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33733
-2022-08-26 14:12:29,968 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36479
-2022-08-26 14:12:29,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,968 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:29,968 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:29,968 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fu9j1347
-2022-08-26 14:12:29,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,969 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42143
-2022-08-26 14:12:29,969 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42143
-2022-08-26 14:12:29,969 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:29,969 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45649
-2022-08-26 14:12:29,969 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36479
-2022-08-26 14:12:29,969 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,969 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:29,969 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:29,969 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0c_nsck7
-2022-08-26 14:12:29,969 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,972 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38677', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:29,972 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38677
-2022-08-26 14:12:29,972 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,973 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42143', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:29,973 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42143
-2022-08-26 14:12:29,973 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,973 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36479
-2022-08-26 14:12:29,973 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,974 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36479
-2022-08-26 14:12:29,974 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:29,974 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,974 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,988 - distributed.scheduler - INFO - Receive client connection: Client-cb48f59c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:29,988 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:29,999 - distributed.scheduler - INFO - Remove client Client-cb48f59c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:29,999 - distributed.scheduler - INFO - Remove client Client-cb48f59c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,000 - distributed.scheduler - INFO - Close client connection: Client-cb48f59c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,001 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38677
-2022-08-26 14:12:30,001 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42143
-2022-08-26 14:12:30,002 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38677', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:30,002 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38677
-2022-08-26 14:12:30,002 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f8d731ba-0d6e-4e67-942b-1f3b74310e5d Address tcp://127.0.0.1:38677 Status: Status.closing
-2022-08-26 14:12:30,002 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c151c691-0f86-4e92-b960-43882f458ab7 Address tcp://127.0.0.1:42143 Status: Status.closing
-2022-08-26 14:12:30,003 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42143', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:30,003 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42143
-2022-08-26 14:12:30,003 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:30,004 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:30,004 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
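
test_gather above covers workers fetching each other's data. From the client side the same machinery is driven by scatter, submit and gather; a short sketch on an in-process cluster:

    from distributed import Client

    client = Client(processes=False)

    # scatter places local data into worker memory and returns futures;
    # gather pulls the values back to the client.
    futures = client.scatter([10, 20, 30])
    print(client.gather(futures))        # [10, 20, 30]

    # Futures passed as arguments are resolved on the workers, which is
    # where worker-to-worker gathering happens.
    total = client.submit(sum, futures)
    print(total.result())                # 60

    client.close()
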
-distributed/tests/test_worker.py::test_gather_missing_keys 2022-08-26 14:12:30,234 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:30,236 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:30,236 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34883
-2022-08-26 14:12:30,236 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45405
-2022-08-26 14:12:30,241 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36965
-2022-08-26 14:12:30,241 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36965
-2022-08-26 14:12:30,241 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:30,241 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33565
-2022-08-26 14:12:30,241 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34883
-2022-08-26 14:12:30,241 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,241 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:30,241 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:30,241 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0yq8bmw9
-2022-08-26 14:12:30,241 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,242 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39879
-2022-08-26 14:12:30,242 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39879
-2022-08-26 14:12:30,242 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:30,242 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36411
-2022-08-26 14:12:30,242 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34883
-2022-08-26 14:12:30,242 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,242 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:30,242 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:30,242 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jdgx9ulp
-2022-08-26 14:12:30,242 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,245 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36965', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:30,245 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36965
-2022-08-26 14:12:30,245 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,246 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39879', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:30,246 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39879
-2022-08-26 14:12:30,246 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,246 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34883
-2022-08-26 14:12:30,246 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,246 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34883
-2022-08-26 14:12:30,246 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,247 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,247 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,260 - distributed.scheduler - INFO - Receive client connection: Client-cb72948b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,261 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,267 - distributed.worker - WARNING - Could not find data: {'y': ['tcp://127.0.0.1:39879']} on workers: [] (who_has: {'str-557e9fe555e8a44ebc49d07908a3e2d4': ['tcp://127.0.0.1:39879'], 'y': ['tcp://127.0.0.1:39879']})
-2022-08-26 14:12:30,272 - distributed.scheduler - INFO - Remove client Client-cb72948b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,272 - distributed.scheduler - INFO - Remove client Client-cb72948b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,272 - distributed.scheduler - INFO - Close client connection: Client-cb72948b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,273 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36965
-2022-08-26 14:12:30,274 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39879
-2022-08-26 14:12:30,274 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36965', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:30,274 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36965
-2022-08-26 14:12:30,275 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-31121be1-130d-4d33-8550-af7ec2a0b9d4 Address tcp://127.0.0.1:36965 Status: Status.closing
-2022-08-26 14:12:30,275 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e767f13d-a0a1-41f4-aabf-66eaa32b921c Address tcp://127.0.0.1:39879 Status: Status.closing
-2022-08-26 14:12:30,275 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39879', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:30,276 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39879
-2022-08-26 14:12:30,276 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:30,276 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:30,276 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gather_missing_workers 2022-08-26 14:12:30,505 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:30,506 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:30,506 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46367
-2022-08-26 14:12:30,506 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34927
-2022-08-26 14:12:30,511 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41841
-2022-08-26 14:12:30,511 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41841
-2022-08-26 14:12:30,511 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:30,511 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45697
-2022-08-26 14:12:30,511 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46367
-2022-08-26 14:12:30,511 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,511 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:30,511 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:30,511 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ta2b2dcx
-2022-08-26 14:12:30,511 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,512 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39871
-2022-08-26 14:12:30,512 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39871
-2022-08-26 14:12:30,512 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:30,512 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41119
-2022-08-26 14:12:30,512 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46367
-2022-08-26 14:12:30,512 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,512 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:30,512 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:30,512 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kl5bwq51
-2022-08-26 14:12:30,512 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,515 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41841', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:30,515 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41841
-2022-08-26 14:12:30,515 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,516 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39871', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:30,516 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39871
-2022-08-26 14:12:30,516 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,516 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46367
-2022-08-26 14:12:30,516 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,517 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46367
-2022-08-26 14:12:30,517 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,517 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,517 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,531 - distributed.scheduler - INFO - Receive client connection: Client-cb9bcd7c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,531 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,636 - distributed.worker - WARNING - Could not find data: {'y': ['tcp://127.0.0.1:12345']} on workers: ['tcp://127.0.0.1:12345'] (who_has: {'str-557e9fe555e8a44ebc49d07908a3e2d4': ['tcp://127.0.0.1:39871'], 'y': ['tcp://127.0.0.1:12345']})
-2022-08-26 14:12:30,647 - distributed.scheduler - INFO - Remove client Client-cb9bcd7c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,647 - distributed.scheduler - INFO - Remove client Client-cb9bcd7c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,648 - distributed.scheduler - INFO - Close client connection: Client-cb9bcd7c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,648 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41841
-2022-08-26 14:12:30,648 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39871
-2022-08-26 14:12:30,649 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41841', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:30,649 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41841
-2022-08-26 14:12:30,649 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39871', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:30,650 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39871
-2022-08-26 14:12:30,650 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:30,650 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fc964647-03e3-493b-a058-ec6d90f63643 Address tcp://127.0.0.1:41841 Status: Status.closing
-2022-08-26 14:12:30,650 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9026c48a-8ae8-462c-9bf3-d4eafb463fb1 Address tcp://127.0.0.1:39871 Status: Status.closing
-2022-08-26 14:12:30,651 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:30,651 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gather_missing_workers_replicated[False] 2022-08-26 14:12:30,880 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:30,881 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:30,882 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:32817
-2022-08-26 14:12:30,882 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43933
-2022-08-26 14:12:30,886 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40991
-2022-08-26 14:12:30,886 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40991
-2022-08-26 14:12:30,886 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:30,886 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43745
-2022-08-26 14:12:30,886 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32817
-2022-08-26 14:12:30,886 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,886 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:30,886 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:30,886 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wki5e7on
-2022-08-26 14:12:30,886 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,887 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45129
-2022-08-26 14:12:30,887 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45129
-2022-08-26 14:12:30,887 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:30,887 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44063
-2022-08-26 14:12:30,887 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32817
-2022-08-26 14:12:30,887 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,887 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:30,887 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:30,887 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-77viur7w
-2022-08-26 14:12:30,887 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,890 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40991', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:30,891 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40991
-2022-08-26 14:12:30,891 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,891 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45129', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:30,891 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45129
-2022-08-26 14:12:30,891 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,892 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32817
-2022-08-26 14:12:30,892 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,892 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32817
-2022-08-26 14:12:30,892 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:30,892 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,892 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,906 - distributed.scheduler - INFO - Receive client connection: Client-cbd50dff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,906 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:30,917 - distributed.scheduler - INFO - Remove client Client-cbd50dff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,917 - distributed.scheduler - INFO - Remove client Client-cbd50dff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,918 - distributed.scheduler - INFO - Close client connection: Client-cbd50dff-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:30,918 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40991
-2022-08-26 14:12:30,919 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45129
-2022-08-26 14:12:30,920 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40991', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:30,920 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40991
-2022-08-26 14:12:30,920 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-37544abb-8281-4765-9e04-05ddbf9bd9aa Address tcp://127.0.0.1:40991 Status: Status.closing
-2022-08-26 14:12:30,920 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-18952806-b1ea-471d-90a1-bc46febe38a4 Address tcp://127.0.0.1:45129 Status: Status.closing
-2022-08-26 14:12:30,921 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45129', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:30,921 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45129
-2022-08-26 14:12:30,921 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:30,921 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:30,922 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gather_missing_workers_replicated[True] 2022-08-26 14:12:31,150 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:31,152 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:31,152 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41595
-2022-08-26 14:12:31,152 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44113
-2022-08-26 14:12:31,156 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44071
-2022-08-26 14:12:31,156 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44071
-2022-08-26 14:12:31,156 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:31,156 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37439
-2022-08-26 14:12:31,156 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41595
-2022-08-26 14:12:31,157 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,157 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:31,157 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:31,157 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3v8vv50o
-2022-08-26 14:12:31,157 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,157 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39483
-2022-08-26 14:12:31,157 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39483
-2022-08-26 14:12:31,157 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:31,157 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36549
-2022-08-26 14:12:31,157 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41595
-2022-08-26 14:12:31,157 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,157 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:31,158 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:31,158 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7ko7kk3k
-2022-08-26 14:12:31,158 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,160 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44071', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:31,161 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44071
-2022-08-26 14:12:31,161 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,161 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39483', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:31,161 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39483
-2022-08-26 14:12:31,161 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,162 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41595
-2022-08-26 14:12:31,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,162 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41595
-2022-08-26 14:12:31,162 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,162 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,162 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,176 - distributed.scheduler - INFO - Receive client connection: Client-cbfe4695-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:31,176 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,187 - distributed.scheduler - INFO - Remove client Client-cbfe4695-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:31,187 - distributed.scheduler - INFO - Remove client Client-cbfe4695-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:31,188 - distributed.scheduler - INFO - Close client connection: Client-cbfe4695-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:31,189 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44071
-2022-08-26 14:12:31,189 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39483
-2022-08-26 14:12:31,190 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44071', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:31,190 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44071
-2022-08-26 14:12:31,190 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1007c57c-0d5c-4ad7-b4db-8af202898e23 Address tcp://127.0.0.1:44071 Status: Status.closing
-2022-08-26 14:12:31,190 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a9b5478a-b97a-4a78-9482-070b93183b56 Address tcp://127.0.0.1:39483 Status: Status.closing
-2022-08-26 14:12:31,191 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39483', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:31,191 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39483
-2022-08-26 14:12:31,191 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:31,192 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:31,192 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_io_loop 2022-08-26 14:12:31,420 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:31,421 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:31,422 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40609
-2022-08-26 14:12:31,422 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35997
-2022-08-26 14:12:31,424 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38775
-2022-08-26 14:12:31,424 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38775
-2022-08-26 14:12:31,424 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41385
-2022-08-26 14:12:31,424 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40609
-2022-08-26 14:12:31,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,425 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:31,425 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:31,425 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nqx9002u
-2022-08-26 14:12:31,425 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,427 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38775', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:31,427 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38775
-2022-08-26 14:12:31,427 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,427 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40609
-2022-08-26 14:12:31,427 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,427 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38775
-2022-08-26 14:12:31,428 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,428 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d16566e4-afb4-4440-a4d3-0fa852716e5d Address tcp://127.0.0.1:38775 Status: Status.closing
-2022-08-26 14:12:31,429 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38775', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:31,429 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38775
-2022-08-26 14:12:31,429 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:31,429 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:31,429 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_io_loop_alternate_loop 2022-08-26 14:12:31,660 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:31,662 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:31,662 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43047
-2022-08-26 14:12:31,662 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33881
-2022-08-26 14:12:31,666 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45807
-2022-08-26 14:12:31,666 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45807
-2022-08-26 14:12:31,666 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38233
-2022-08-26 14:12:31,666 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43047
-2022-08-26 14:12:31,666 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,666 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:31,666 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:31,666 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1_ftl59j
-2022-08-26 14:12:31,666 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,668 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45807', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:31,669 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45807
-2022-08-26 14:12:31,669 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,669 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43047
-2022-08-26 14:12:31,669 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,669 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45807
-2022-08-26 14:12:31,670 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,670 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-94c20ad4-517f-4709-b57c-74391ab36314 Address tcp://127.0.0.1:45807 Status: Status.closing
-2022-08-26 14:12:31,671 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45807', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:31,671 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45807
-2022-08-26 14:12:31,671 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:31,672 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:31,672 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_access_key 2022-08-26 14:12:31,901 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:31,902 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:31,903 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41831
-2022-08-26 14:12:31,903 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46741
-2022-08-26 14:12:31,907 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45723
-2022-08-26 14:12:31,907 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45723
-2022-08-26 14:12:31,907 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:31,907 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41313
-2022-08-26 14:12:31,908 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41831
-2022-08-26 14:12:31,908 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,908 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:31,908 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:31,908 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-10c9iuty
-2022-08-26 14:12:31,908 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,908 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45871
-2022-08-26 14:12:31,908 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45871
-2022-08-26 14:12:31,908 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:31,908 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39645
-2022-08-26 14:12:31,908 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41831
-2022-08-26 14:12:31,909 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,909 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:31,909 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:31,909 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4o9m_97e
-2022-08-26 14:12:31,909 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,911 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45723', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:31,912 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45723
-2022-08-26 14:12:31,912 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,912 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45871', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:31,912 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45871
-2022-08-26 14:12:31,913 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,913 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41831
-2022-08-26 14:12:31,913 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,913 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41831
-2022-08-26 14:12:31,913 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:31,913 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,913 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,927 - distributed.scheduler - INFO - Receive client connection: Client-cc70e57f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:31,927 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:31,974 - distributed.scheduler - INFO - Remove client Client-cc70e57f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:31,974 - distributed.scheduler - INFO - Remove client Client-cc70e57f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:31,975 - distributed.scheduler - INFO - Close client connection: Client-cc70e57f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:31,975 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45723
-2022-08-26 14:12:31,975 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45871
-2022-08-26 14:12:31,976 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45723', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:31,976 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45723
-2022-08-26 14:12:31,977 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45871', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:31,977 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45871
-2022-08-26 14:12:31,977 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:31,977 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fe9a37b2-5f19-4afe-8da5-dc1b493455d0 Address tcp://127.0.0.1:45723 Status: Status.closing
-2022-08-26 14:12:31,977 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-07538c6f-eef4-4ec8-9f1b-df04901cc373 Address tcp://127.0.0.1:45871 Status: Status.closing
-2022-08-26 14:12:31,978 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:31,979 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_run_dask_worker 2022-08-26 14:12:32,208 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:32,210 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:32,210 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33009
-2022-08-26 14:12:32,210 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42779
-2022-08-26 14:12:32,215 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38897
-2022-08-26 14:12:32,215 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38897
-2022-08-26 14:12:32,215 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:32,215 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43655
-2022-08-26 14:12:32,215 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33009
-2022-08-26 14:12:32,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,215 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:32,215 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:32,215 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vqmsk63r
-2022-08-26 14:12:32,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,216 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38457
-2022-08-26 14:12:32,216 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38457
-2022-08-26 14:12:32,216 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:32,216 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35281
-2022-08-26 14:12:32,216 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33009
-2022-08-26 14:12:32,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,216 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:32,216 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:32,216 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ep85ptnv
-2022-08-26 14:12:32,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,219 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38897', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:32,219 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38897
-2022-08-26 14:12:32,219 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,220 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38457', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:32,220 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38457
-2022-08-26 14:12:32,220 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,220 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33009
-2022-08-26 14:12:32,220 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,221 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33009
-2022-08-26 14:12:32,221 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,221 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,221 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,235 - distributed.scheduler - INFO - Receive client connection: Client-cc9fd4f6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,235 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,238 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:12:32,239 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:12:32,247 - distributed.scheduler - INFO - Remove client Client-cc9fd4f6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,247 - distributed.scheduler - INFO - Remove client Client-cc9fd4f6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,247 - distributed.scheduler - INFO - Close client connection: Client-cc9fd4f6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,247 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38897
-2022-08-26 14:12:32,248 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38457
-2022-08-26 14:12:32,249 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38897', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:32,249 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38897
-2022-08-26 14:12:32,249 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38457', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:32,249 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38457
-2022-08-26 14:12:32,249 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:32,249 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-568b95d1-4179-406c-b4c0-34fc9ea1f503 Address tcp://127.0.0.1:38897 Status: Status.closing
-2022-08-26 14:12:32,250 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c9d3927f-a225-4a7e-b435-b26e01db3c64 Address tcp://127.0.0.1:38457 Status: Status.closing
-2022-08-26 14:12:32,250 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:32,250 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_run_dask_worker_kwonlyarg 2022-08-26 14:12:32,481 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:32,482 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:32,482 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37533
-2022-08-26 14:12:32,482 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46253
-2022-08-26 14:12:32,487 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34891
-2022-08-26 14:12:32,487 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34891
-2022-08-26 14:12:32,487 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:32,487 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43003
-2022-08-26 14:12:32,487 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37533
-2022-08-26 14:12:32,487 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,487 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:32,487 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:32,487 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t6l4jzxd
-2022-08-26 14:12:32,487 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,488 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40327
-2022-08-26 14:12:32,488 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40327
-2022-08-26 14:12:32,488 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:32,488 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38885
-2022-08-26 14:12:32,488 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37533
-2022-08-26 14:12:32,488 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,488 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:32,488 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:32,488 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j4s71l5u
-2022-08-26 14:12:32,488 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,491 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34891', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:32,491 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34891
-2022-08-26 14:12:32,491 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,492 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40327', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:32,492 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40327
-2022-08-26 14:12:32,492 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,492 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37533
-2022-08-26 14:12:32,492 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,493 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37533
-2022-08-26 14:12:32,493 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,493 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,493 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,507 - distributed.scheduler - INFO - Receive client connection: Client-ccc95149-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,507 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,510 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:12:32,510 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:12:32,519 - distributed.scheduler - INFO - Remove client Client-ccc95149-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,519 - distributed.scheduler - INFO - Remove client Client-ccc95149-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,519 - distributed.scheduler - INFO - Close client connection: Client-ccc95149-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,519 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34891
-2022-08-26 14:12:32,520 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40327
-2022-08-26 14:12:32,521 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34891', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:32,521 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34891
-2022-08-26 14:12:32,521 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40327', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:32,521 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40327
-2022-08-26 14:12:32,521 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:32,521 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-de71d794-d1f3-4317-bcb8-09b26d2d96a6 Address tcp://127.0.0.1:34891 Status: Status.closing
-2022-08-26 14:12:32,521 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cba56e87-0b7b-4998-a209-168c9471a233 Address tcp://127.0.0.1:40327 Status: Status.closing
-2022-08-26 14:12:32,522 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:32,522 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_run_coroutine_dask_worker 2022-08-26 14:12:32,752 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:32,753 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:32,754 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35015
-2022-08-26 14:12:32,754 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33455
-2022-08-26 14:12:32,758 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34735
-2022-08-26 14:12:32,758 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34735
-2022-08-26 14:12:32,758 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:32,758 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46807
-2022-08-26 14:12:32,758 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35015
-2022-08-26 14:12:32,758 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,758 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:32,758 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:32,759 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z8_6u20s
-2022-08-26 14:12:32,759 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,759 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42349
-2022-08-26 14:12:32,759 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42349
-2022-08-26 14:12:32,759 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:32,759 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36903
-2022-08-26 14:12:32,759 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35015
-2022-08-26 14:12:32,759 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,759 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:32,759 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:32,759 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dwfhjpve
-2022-08-26 14:12:32,760 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,762 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34735', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:32,763 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34735
-2022-08-26 14:12:32,763 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,763 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42349', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:32,763 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42349
-2022-08-26 14:12:32,763 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,764 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35015
-2022-08-26 14:12:32,764 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,764 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35015
-2022-08-26 14:12:32,764 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:32,764 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,764 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,778 - distributed.scheduler - INFO - Receive client connection: Client-ccf2b783-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,778 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:32,782 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:12:32,782 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:12:32,790 - distributed.scheduler - INFO - Remove client Client-ccf2b783-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,790 - distributed.scheduler - INFO - Remove client Client-ccf2b783-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,790 - distributed.scheduler - INFO - Close client connection: Client-ccf2b783-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:32,790 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34735
-2022-08-26 14:12:32,791 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42349
-2022-08-26 14:12:32,792 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34735', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:32,792 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34735
-2022-08-26 14:12:32,792 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42349', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:32,792 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42349
-2022-08-26 14:12:32,792 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:32,792 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9ae88af1-ced5-4bfd-89db-53f3d3afc9e2 Address tcp://127.0.0.1:34735 Status: Status.closing
-2022-08-26 14:12:32,792 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9da0d3a6-bade-4449-9813-7eb2fd6a0c80 Address tcp://127.0.0.1:42349 Status: Status.closing
-2022-08-26 14:12:32,793 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:32,793 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_Executor 2022-08-26 14:12:33,023 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:33,025 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:33,025 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34643
-2022-08-26 14:12:33,025 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34401
-2022-08-26 14:12:33,028 - distributed.scheduler - INFO - Receive client connection: Client-cd18dd9e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:33,028 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,031 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46849
-2022-08-26 14:12:33,031 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46849
-2022-08-26 14:12:33,031 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37077
-2022-08-26 14:12:33,031 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34643
-2022-08-26 14:12:33,031 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,031 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:33,032 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:33,032 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3txz8vr7
-2022-08-26 14:12:33,032 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,034 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46849', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:33,034 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46849
-2022-08-26 14:12:33,034 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,034 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34643
-2022-08-26 14:12:33,034 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,035 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,044 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46849
-2022-08-26 14:12:33,044 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46849', status: closing, memory: 1, processing: 0>
-2022-08-26 14:12:33,045 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46849
-2022-08-26 14:12:33,045 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:33,045 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-be2a709d-f357-46da-a228-43a8fb5b2b2c Address tcp://127.0.0.1:46849 Status: Status.closing
-2022-08-26 14:12:33,050 - distributed.scheduler - INFO - Remove client Client-cd18dd9e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:33,051 - distributed.scheduler - INFO - Remove client Client-cd18dd9e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:33,051 - distributed.scheduler - INFO - Close client connection: Client-cd18dd9e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:33,051 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:33,051 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_close_on_disconnect 2022-08-26 14:12:33,281 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:33,282 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:33,283 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41481
-2022-08-26 14:12:33,283 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43747
-2022-08-26 14:12:33,286 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41671
-2022-08-26 14:12:33,286 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41671
-2022-08-26 14:12:33,286 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:33,286 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34421
-2022-08-26 14:12:33,286 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41481
-2022-08-26 14:12:33,286 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,286 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:33,286 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:33,286 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yylvdix3
-2022-08-26 14:12:33,286 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,288 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41671', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:33,288 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41671
-2022-08-26 14:12:33,288 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,288 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41481
-2022-08-26 14:12:33,289 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,289 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,299 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:33,300 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:12:33,300 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41671', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:12:33,300 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41671
-2022-08-26 14:12:33,300 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:33,300 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41671
-2022-08-26 14:12:33,301 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-66d07aae-76f3-4a84-8d35-03a243ef4313 Address tcp://127.0.0.1:41671 Status: Status.closing
-2022-08-26 14:12:33,301 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:42768 remote=tcp://127.0.0.1:41481>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-PASSED
-distributed/tests/test_worker.py::test_memory_limit_auto 2022-08-26 14:12:33,540 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:33,542 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:33,542 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33181
-2022-08-26 14:12:33,542 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35295
-2022-08-26 14:12:33,545 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46083
-2022-08-26 14:12:33,545 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46083
-2022-08-26 14:12:33,545 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46647
-2022-08-26 14:12:33,545 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33181
-2022-08-26 14:12:33,545 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,545 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:33,545 - distributed.worker - INFO -                Memory:                   5.24 GiB
-2022-08-26 14:12:33,545 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ow45m_cw
-2022-08-26 14:12:33,545 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,547 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46083', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:33,547 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46083
-2022-08-26 14:12:33,547 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,547 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33181
-2022-08-26 14:12:33,548 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,550 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40375
-2022-08-26 14:12:33,550 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40375
-2022-08-26 14:12:33,550 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34637
-2022-08-26 14:12:33,550 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33181
-2022-08-26 14:12:33,550 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,550 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:33,550 - distributed.worker - INFO -                Memory:                  10.47 GiB
-2022-08-26 14:12:33,550 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hizpl8mv
-2022-08-26 14:12:33,550 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,551 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,552 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40375', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:33,553 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40375
-2022-08-26 14:12:33,553 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,553 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33181
-2022-08-26 14:12:33,553 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,556 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38895
-2022-08-26 14:12:33,556 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38895
-2022-08-26 14:12:33,556 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39931
-2022-08-26 14:12:33,556 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33181
-2022-08-26 14:12:33,556 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,556 - distributed.worker - INFO -               Threads:                        100
-2022-08-26 14:12:33,556 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:33,556 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3drj2ygx
-2022-08-26 14:12:33,556 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,556 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,558 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38895', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:33,558 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38895
-2022-08-26 14:12:33,558 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,559 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33181
-2022-08-26 14:12:33,559 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,561 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37395
-2022-08-26 14:12:33,561 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37395
-2022-08-26 14:12:33,561 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42755
-2022-08-26 14:12:33,561 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33181
-2022-08-26 14:12:33,561 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,562 - distributed.worker - INFO -               Threads:                        200
-2022-08-26 14:12:33,562 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:33,562 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ijjrf6nb
-2022-08-26 14:12:33,562 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,562 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,564 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37395', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:33,564 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37395
-2022-08-26 14:12:33,564 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,564 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33181
-2022-08-26 14:12:33,564 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,564 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37395
-2022-08-26 14:12:33,565 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,565 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5c1939ac-79f5-463f-9bdf-cb7dd4efddd7 Address tcp://127.0.0.1:37395 Status: Status.closing
-2022-08-26 14:12:33,566 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37395', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:33,566 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37395
-2022-08-26 14:12:33,566 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38895
-2022-08-26 14:12:33,567 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38895', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:33,567 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38895
-2022-08-26 14:12:33,567 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-72dbf9b3-e046-4aff-9d0a-141b0ac97646 Address tcp://127.0.0.1:38895 Status: Status.closing
-2022-08-26 14:12:33,567 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40375
-2022-08-26 14:12:33,568 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40375', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:33,568 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40375
-2022-08-26 14:12:33,568 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bac0b409-fad3-48f0-a022-767150ace327 Address tcp://127.0.0.1:40375 Status: Status.closing
-2022-08-26 14:12:33,569 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46083
-2022-08-26 14:12:33,569 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46083', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:33,570 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46083
-2022-08-26 14:12:33,570 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:33,570 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b69f99b1-b785-4081-b4c7-0d7e7505c994 Address tcp://127.0.0.1:46083 Status: Status.closing
-2022-08-26 14:12:33,570 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:33,570 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_inter_worker_communication 2022-08-26 14:12:33,800 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:33,802 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:33,802 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40975
-2022-08-26 14:12:33,802 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33411
-2022-08-26 14:12:33,807 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35153
-2022-08-26 14:12:33,807 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35153
-2022-08-26 14:12:33,807 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:33,807 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33185
-2022-08-26 14:12:33,807 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40975
-2022-08-26 14:12:33,807 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,807 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:33,807 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:33,807 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-b1uq6enb
-2022-08-26 14:12:33,807 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,807 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35539
-2022-08-26 14:12:33,808 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35539
-2022-08-26 14:12:33,808 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:33,808 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35481
-2022-08-26 14:12:33,808 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40975
-2022-08-26 14:12:33,808 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,808 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:33,808 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:33,808 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-opllzr7m
-2022-08-26 14:12:33,808 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,811 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35153', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:33,811 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35153
-2022-08-26 14:12:33,811 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,811 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35539', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:33,812 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35539
-2022-08-26 14:12:33,812 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,812 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40975
-2022-08-26 14:12:33,812 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,812 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40975
-2022-08-26 14:12:33,812 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:33,813 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,813 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,826 - distributed.scheduler - INFO - Receive client connection: Client-cd92b051-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:33,827 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:33,849 - distributed.scheduler - INFO - Remove client Client-cd92b051-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:33,850 - distributed.scheduler - INFO - Remove client Client-cd92b051-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:33,850 - distributed.scheduler - INFO - Close client connection: Client-cd92b051-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:33,852 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35153
-2022-08-26 14:12:33,852 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35539
-2022-08-26 14:12:33,853 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f238ee66-3369-40e5-9018-7f649847d0d2 Address tcp://127.0.0.1:35153 Status: Status.closing
-2022-08-26 14:12:33,853 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-07ee95d0-390d-451f-b4c8-bd6945091fec Address tcp://127.0.0.1:35539 Status: Status.closing
-2022-08-26 14:12:33,854 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35153', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:33,854 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35153
-2022-08-26 14:12:33,854 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35539', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:33,854 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35539
-2022-08-26 14:12:33,854 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:33,855 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:33,855 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_clean 2022-08-26 14:12:34,086 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:34,087 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:34,088 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41883
-2022-08-26 14:12:34,088 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38117
-2022-08-26 14:12:34,092 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37357
-2022-08-26 14:12:34,092 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37357
-2022-08-26 14:12:34,092 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:34,092 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37783
-2022-08-26 14:12:34,092 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41883
-2022-08-26 14:12:34,092 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,092 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:34,093 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:34,093 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8ouru64a
-2022-08-26 14:12:34,093 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,093 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37985
-2022-08-26 14:12:34,093 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37985
-2022-08-26 14:12:34,093 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:34,093 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43345
-2022-08-26 14:12:34,093 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41883
-2022-08-26 14:12:34,093 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,093 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:34,093 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:34,094 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xslgbvra
-2022-08-26 14:12:34,094 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,096 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37357', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:34,097 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37357
-2022-08-26 14:12:34,097 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,097 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37985', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:34,097 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37985
-2022-08-26 14:12:34,098 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,098 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41883
-2022-08-26 14:12:34,098 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,098 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41883
-2022-08-26 14:12:34,098 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,098 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,098 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,112 - distributed.scheduler - INFO - Receive client connection: Client-cdbe4c85-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,112 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,146 - distributed.scheduler - INFO - Remove client Client-cdbe4c85-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,146 - distributed.scheduler - INFO - Remove client Client-cdbe4c85-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,147 - distributed.scheduler - INFO - Close client connection: Client-cdbe4c85-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,147 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37357
-2022-08-26 14:12:34,147 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37985
-2022-08-26 14:12:34,148 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37357', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:34,148 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37357
-2022-08-26 14:12:34,149 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37985', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:34,149 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37985
-2022-08-26 14:12:34,149 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:34,149 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1218ddb1-5e02-434a-ae6c-46cda9048c23 Address tcp://127.0.0.1:37357 Status: Status.closing
-2022-08-26 14:12:34,149 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a07d0531-73ac-441f-8591-09f640d28fb3 Address tcp://127.0.0.1:37985 Status: Status.closing
-2022-08-26 14:12:34,150 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:34,150 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_message_breakup 2022-08-26 14:12:34,381 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:34,382 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:34,383 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36693
-2022-08-26 14:12:34,383 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41869
-2022-08-26 14:12:34,387 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32967
-2022-08-26 14:12:34,387 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32967
-2022-08-26 14:12:34,387 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:34,387 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43233
-2022-08-26 14:12:34,387 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36693
-2022-08-26 14:12:34,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,387 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:34,387 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:34,388 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q4a3oei9
-2022-08-26 14:12:34,388 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,388 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35693
-2022-08-26 14:12:34,388 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35693
-2022-08-26 14:12:34,388 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:34,388 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41847
-2022-08-26 14:12:34,388 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36693
-2022-08-26 14:12:34,388 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,388 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:34,388 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:34,388 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vf0btw8c
-2022-08-26 14:12:34,389 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,391 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32967', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:34,392 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32967
-2022-08-26 14:12:34,392 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,392 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35693', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:34,392 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35693
-2022-08-26 14:12:34,392 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,393 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36693
-2022-08-26 14:12:34,393 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,393 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36693
-2022-08-26 14:12:34,393 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,393 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,393 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,407 - distributed.scheduler - INFO - Receive client connection: Client-cdeb49a6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,407 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,490 - distributed.scheduler - INFO - Remove client Client-cdeb49a6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,490 - distributed.scheduler - INFO - Remove client Client-cdeb49a6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,491 - distributed.scheduler - INFO - Close client connection: Client-cdeb49a6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,491 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32967
-2022-08-26 14:12:34,491 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35693
-2022-08-26 14:12:34,492 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32967', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:34,492 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32967
-2022-08-26 14:12:34,492 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35693', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:34,493 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35693
-2022-08-26 14:12:34,493 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:34,493 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fe6dd2f2-f83f-45e1-88c5-ad6fed68d052 Address tcp://127.0.0.1:32967 Status: Status.closing
-2022-08-26 14:12:34,493 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a6a98a16-9d86-4b72-b162-8cad4aee3cd6 Address tcp://127.0.0.1:35693 Status: Status.closing
-2022-08-26 14:12:34,494 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:34,495 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_types 2022-08-26 14:12:34,725 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:34,727 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:34,727 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37779
-2022-08-26 14:12:34,727 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35811
-2022-08-26 14:12:34,732 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44799
-2022-08-26 14:12:34,732 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44799
-2022-08-26 14:12:34,732 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:34,732 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34483
-2022-08-26 14:12:34,732 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37779
-2022-08-26 14:12:34,732 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,732 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:34,732 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:34,732 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-snf90_6w
-2022-08-26 14:12:34,732 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,733 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37513
-2022-08-26 14:12:34,733 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37513
-2022-08-26 14:12:34,733 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:34,733 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40505
-2022-08-26 14:12:34,733 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37779
-2022-08-26 14:12:34,733 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,733 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:34,733 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:34,733 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9pqm0qbp
-2022-08-26 14:12:34,733 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,736 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44799', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:34,736 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44799
-2022-08-26 14:12:34,736 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,737 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37513', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:34,737 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37513
-2022-08-26 14:12:34,737 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,737 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37779
-2022-08-26 14:12:34,737 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,737 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37779
-2022-08-26 14:12:34,738 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:34,738 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,738 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,752 - distributed.scheduler - INFO - Receive client connection: Client-ce1fdeab-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,752 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:34,780 - distributed.scheduler - INFO - Client Client-ce1fdeab-2583-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:12:34,780 - distributed.scheduler - INFO - Scheduler cancels key inc-64456584d8a2e7176e2cd177efaa15f2.  Force=False
-2022-08-26 14:12:34,784 - distributed.scheduler - INFO - Remove client Client-ce1fdeab-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,784 - distributed.scheduler - INFO - Remove client Client-ce1fdeab-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,785 - distributed.scheduler - INFO - Close client connection: Client-ce1fdeab-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:34,785 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44799
-2022-08-26 14:12:34,786 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37513
-2022-08-26 14:12:34,786 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37513', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:34,787 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37513
-2022-08-26 14:12:34,787 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b8ec6859-debe-4ce8-92c7-27b3638dbf73 Address tcp://127.0.0.1:37513 Status: Status.closing
-2022-08-26 14:12:34,787 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-220c5722-c747-41e1-a219-b1ef90b4fd99 Address tcp://127.0.0.1:44799 Status: Status.closing
-2022-08-26 14:12:34,788 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44799', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:34,788 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44799
-2022-08-26 14:12:34,788 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:34,789 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:34,789 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_system_monitor 2022-08-26 14:12:35,020 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:35,022 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:35,022 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41545
-2022-08-26 14:12:35,022 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41721
-2022-08-26 14:12:35,026 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39603
-2022-08-26 14:12:35,026 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39603
-2022-08-26 14:12:35,026 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:35,027 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46855
-2022-08-26 14:12:35,027 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41545
-2022-08-26 14:12:35,027 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,027 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:35,027 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:35,027 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uzqlwtgo
-2022-08-26 14:12:35,027 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,027 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35089
-2022-08-26 14:12:35,027 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35089
-2022-08-26 14:12:35,027 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:35,027 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34207
-2022-08-26 14:12:35,028 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41545
-2022-08-26 14:12:35,028 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,028 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:35,028 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:35,028 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cymepgux
-2022-08-26 14:12:35,028 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,031 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39603', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:35,031 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39603
-2022-08-26 14:12:35,031 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,031 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35089', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:35,032 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35089
-2022-08-26 14:12:35,032 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,032 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41545
-2022-08-26 14:12:35,032 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,032 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41545
-2022-08-26 14:12:35,032 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,033 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,033 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,044 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39603
-2022-08-26 14:12:35,045 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35089
-2022-08-26 14:12:35,046 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39603', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:35,046 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39603
-2022-08-26 14:12:35,046 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35089', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:35,046 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35089
-2022-08-26 14:12:35,046 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:35,046 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d06cff93-671a-482b-abe2-f65c0fe52e4f Address tcp://127.0.0.1:39603 Status: Status.closing
-2022-08-26 14:12:35,046 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-64fb7a8b-34cd-4a3d-8d50-32838497dd1c Address tcp://127.0.0.1:35089 Status: Status.closing
-2022-08-26 14:12:35,047 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:35,047 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_restrictions 2022-08-26 14:12:35,277 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:35,278 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:35,279 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40587
-2022-08-26 14:12:35,279 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42473
-2022-08-26 14:12:35,283 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45355
-2022-08-26 14:12:35,283 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45355
-2022-08-26 14:12:35,283 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:35,283 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39423
-2022-08-26 14:12:35,283 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40587
-2022-08-26 14:12:35,284 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,284 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:35,284 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:35,284 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-b_tby4kv
-2022-08-26 14:12:35,284 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,284 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43423
-2022-08-26 14:12:35,284 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43423
-2022-08-26 14:12:35,284 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:35,284 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34367
-2022-08-26 14:12:35,284 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40587
-2022-08-26 14:12:35,284 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,285 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:35,285 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:35,285 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-don1gk3x
-2022-08-26 14:12:35,285 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,288 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45355', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:35,288 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45355
-2022-08-26 14:12:35,288 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,288 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43423', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:35,289 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43423
-2022-08-26 14:12:35,289 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,289 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40587
-2022-08-26 14:12:35,289 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,289 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40587
-2022-08-26 14:12:35,289 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,289 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,290 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,303 - distributed.scheduler - INFO - Receive client connection: Client-ce740c22-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:35,303 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,319 - distributed.scheduler - INFO - Client Client-ce740c22-2583-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:12:35,319 - distributed.scheduler - INFO - Scheduler cancels key inc-03d935909bba38f9a49655e867cbf56a.  Force=False
-2022-08-26 14:12:35,326 - distributed.scheduler - INFO - Remove client Client-ce740c22-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:35,326 - distributed.scheduler - INFO - Remove client Client-ce740c22-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:35,326 - distributed.scheduler - INFO - Close client connection: Client-ce740c22-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:35,327 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45355
-2022-08-26 14:12:35,327 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43423
-2022-08-26 14:12:35,328 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45355', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:35,328 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45355
-2022-08-26 14:12:35,328 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43423', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:35,328 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43423
-2022-08-26 14:12:35,328 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:35,328 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e0b0a6f7-7202-417e-b8ef-ffd4f042c0a9 Address tcp://127.0.0.1:45355 Status: Status.closing
-2022-08-26 14:12:35,329 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a8cfca32-dfb6-403f-b2fb-4da8f835cd27 Address tcp://127.0.0.1:43423 Status: Status.closing
-2022-08-26 14:12:35,329 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:35,330 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_clean_nbytes 2022-08-26 14:12:35,560 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:35,561 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:35,561 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34541
-2022-08-26 14:12:35,561 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39399
-2022-08-26 14:12:35,566 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41485
-2022-08-26 14:12:35,566 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41485
-2022-08-26 14:12:35,566 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:35,566 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43657
-2022-08-26 14:12:35,566 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34541
-2022-08-26 14:12:35,566 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,566 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:35,566 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:35,566 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-d3r0124b
-2022-08-26 14:12:35,566 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,567 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44073
-2022-08-26 14:12:35,567 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44073
-2022-08-26 14:12:35,567 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:35,567 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46805
-2022-08-26 14:12:35,567 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34541
-2022-08-26 14:12:35,567 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,567 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:35,567 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:35,567 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sxjzetby
-2022-08-26 14:12:35,567 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,570 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41485', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:35,571 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41485
-2022-08-26 14:12:35,571 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,571 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44073', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:35,571 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44073
-2022-08-26 14:12:35,571 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,572 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34541
-2022-08-26 14:12:35,572 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,572 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34541
-2022-08-26 14:12:35,572 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:35,572 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,572 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:35,586 - distributed.scheduler - INFO - Receive client connection: Client-ce9f33e6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:35,586 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:36,682 - distributed.scheduler - INFO - Remove client Client-ce9f33e6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:36,683 - distributed.scheduler - INFO - Remove client Client-ce9f33e6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:36,683 - distributed.scheduler - INFO - Close client connection: Client-ce9f33e6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:36,683 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41485
-2022-08-26 14:12:36,684 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44073
-2022-08-26 14:12:36,684 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41485', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:36,685 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41485
-2022-08-26 14:12:36,685 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44073', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:36,685 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44073
-2022-08-26 14:12:36,685 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:36,685 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d64ef49b-4ee5-4049-b9a8-3de2e8461a2f Address tcp://127.0.0.1:41485 Status: Status.closing
-2022-08-26 14:12:36,685 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9621e9ce-ba08-45ad-8ce9-758fd84f62f6 Address tcp://127.0.0.1:44073 Status: Status.closing
-2022-08-26 14:12:36,687 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:36,687 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gather_many_small[True] 2022-08-26 14:12:36,919 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:36,920 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:36,920 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43261
-2022-08-26 14:12:36,920 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46095
-2022-08-26 14:12:36,960 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45751
-2022-08-26 14:12:36,960 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45751
-2022-08-26 14:12:36,961 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:36,961 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35445
-2022-08-26 14:12:36,961 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,961 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,961 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,961 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,961 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-g_uuqxkd
-2022-08-26 14:12:36,961 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,962 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45843
-2022-08-26 14:12:36,962 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45843
-2022-08-26 14:12:36,962 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:36,962 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36443
-2022-08-26 14:12:36,962 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,962 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,962 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,962 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,962 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lp1ox8gm
-2022-08-26 14:12:36,962 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,963 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40643
-2022-08-26 14:12:36,963 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40643
-2022-08-26 14:12:36,963 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:12:36,963 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40813
-2022-08-26 14:12:36,963 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,963 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,963 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,963 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,963 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rw0x2t76
-2022-08-26 14:12:36,964 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,964 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43005
-2022-08-26 14:12:36,964 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43005
-2022-08-26 14:12:36,964 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:12:36,964 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42939
-2022-08-26 14:12:36,964 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,964 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,965 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,965 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,965 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vri801fc
-2022-08-26 14:12:36,965 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,965 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38669
-2022-08-26 14:12:36,965 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38669
-2022-08-26 14:12:36,965 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:12:36,966 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46015
-2022-08-26 14:12:36,966 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,966 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,966 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,966 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,966 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-omqk18bo
-2022-08-26 14:12:36,966 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,966 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42613
-2022-08-26 14:12:36,967 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42613
-2022-08-26 14:12:36,967 - distributed.worker - INFO -           Worker name:                          5
-2022-08-26 14:12:36,967 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41839
-2022-08-26 14:12:36,967 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,967 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,967 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,967 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,967 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rlodzdwf
-2022-08-26 14:12:36,967 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,968 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35671
-2022-08-26 14:12:36,968 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35671
-2022-08-26 14:12:36,968 - distributed.worker - INFO -           Worker name:                          6
-2022-08-26 14:12:36,968 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34197
-2022-08-26 14:12:36,968 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,968 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,968 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,968 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-28633beo
-2022-08-26 14:12:36,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,969 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43553
-2022-08-26 14:12:36,969 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43553
-2022-08-26 14:12:36,969 - distributed.worker - INFO -           Worker name:                          7
-2022-08-26 14:12:36,969 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41517
-2022-08-26 14:12:36,969 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,969 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,969 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,969 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,969 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jyxfy6dx
-2022-08-26 14:12:36,970 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,970 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37999
-2022-08-26 14:12:36,970 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37999
-2022-08-26 14:12:36,970 - distributed.worker - INFO -           Worker name:                          8
-2022-08-26 14:12:36,970 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41923
-2022-08-26 14:12:36,970 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,970 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,971 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,971 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,971 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-127atloz
-2022-08-26 14:12:36,971 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,971 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39753
-2022-08-26 14:12:36,971 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39753
-2022-08-26 14:12:36,971 - distributed.worker - INFO -           Worker name:                          9
-2022-08-26 14:12:36,971 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45529
-2022-08-26 14:12:36,972 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,972 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,972 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,972 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,972 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hkuve7sx
-2022-08-26 14:12:36,972 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,972 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37907
-2022-08-26 14:12:36,973 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37907
-2022-08-26 14:12:36,973 - distributed.worker - INFO -           Worker name:                         10
-2022-08-26 14:12:36,973 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46147
-2022-08-26 14:12:36,973 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,973 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,973 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,973 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,973 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tw5tizuh
-2022-08-26 14:12:36,973 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,974 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44135
-2022-08-26 14:12:36,974 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44135
-2022-08-26 14:12:36,974 - distributed.worker - INFO -           Worker name:                         11
-2022-08-26 14:12:36,974 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34467
-2022-08-26 14:12:36,974 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,974 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,974 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,974 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,974 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rdbj4n_k
-2022-08-26 14:12:36,974 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,975 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37259
-2022-08-26 14:12:36,975 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37259
-2022-08-26 14:12:36,975 - distributed.worker - INFO -           Worker name:                         12
-2022-08-26 14:12:36,975 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46255
-2022-08-26 14:12:36,975 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,975 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,975 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,975 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,975 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jk75ewhc
-2022-08-26 14:12:36,975 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,976 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35889
-2022-08-26 14:12:36,976 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35889
-2022-08-26 14:12:36,976 - distributed.worker - INFO -           Worker name:                         13
-2022-08-26 14:12:36,976 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41423
-2022-08-26 14:12:36,976 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,976 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,976 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,977 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,977 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q7kdrxn_
-2022-08-26 14:12:36,977 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,977 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36295
-2022-08-26 14:12:36,977 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36295
-2022-08-26 14:12:36,977 - distributed.worker - INFO -           Worker name:                         14
-2022-08-26 14:12:36,977 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44471
-2022-08-26 14:12:36,978 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,978 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,978 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,978 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,978 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mnew2vcb
-2022-08-26 14:12:36,978 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,979 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39515
-2022-08-26 14:12:36,979 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39515
-2022-08-26 14:12:36,979 - distributed.worker - INFO -           Worker name:                         15
-2022-08-26 14:12:36,979 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37889
-2022-08-26 14:12:36,979 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,979 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,979 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,979 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,979 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9ri4_mmh
-2022-08-26 14:12:36,980 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,980 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40795
-2022-08-26 14:12:36,980 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40795
-2022-08-26 14:12:36,980 - distributed.worker - INFO -           Worker name:                         16
-2022-08-26 14:12:36,980 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34447
-2022-08-26 14:12:36,980 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,980 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,980 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,981 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,981 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-birfb39t
-2022-08-26 14:12:36,981 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,981 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35063
-2022-08-26 14:12:36,981 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35063
-2022-08-26 14:12:36,981 - distributed.worker - INFO -           Worker name:                         17
-2022-08-26 14:12:36,981 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42263
-2022-08-26 14:12:36,982 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,982 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,982 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,982 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tb8xuv7y
-2022-08-26 14:12:36,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,982 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37793
-2022-08-26 14:12:36,982 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37793
-2022-08-26 14:12:36,983 - distributed.worker - INFO -           Worker name:                         18
-2022-08-26 14:12:36,983 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39369
-2022-08-26 14:12:36,983 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,983 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,983 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,983 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,983 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ihvwaocu
-2022-08-26 14:12:36,983 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,984 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44169
-2022-08-26 14:12:36,984 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44169
-2022-08-26 14:12:36,984 - distributed.worker - INFO -           Worker name:                         19
-2022-08-26 14:12:36,984 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36595
-2022-08-26 14:12:36,984 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,984 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,984 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,984 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,984 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-14r2gqk7
-2022-08-26 14:12:36,984 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,985 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43757
-2022-08-26 14:12:36,985 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43757
-2022-08-26 14:12:36,985 - distributed.worker - INFO -           Worker name:                         20
-2022-08-26 14:12:36,985 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37245
-2022-08-26 14:12:36,985 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:36,985 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:36,985 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:36,985 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:36,985 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kl78w_gi
-2022-08-26 14:12:36,985 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,006 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45751', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,006 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45751
-2022-08-26 14:12:37,006 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,006 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45843', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,007 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45843
-2022-08-26 14:12:37,007 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,007 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40643', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,007 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40643
-2022-08-26 14:12:37,007 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,008 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43005', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,008 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43005
-2022-08-26 14:12:37,008 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,008 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38669', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,009 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38669
-2022-08-26 14:12:37,009 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,009 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42613', name: 5, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,009 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42613
-2022-08-26 14:12:37,009 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,010 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35671', name: 6, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,010 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35671
-2022-08-26 14:12:37,010 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,010 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43553', name: 7, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,011 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43553
-2022-08-26 14:12:37,011 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,011 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37999', name: 8, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,011 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37999
-2022-08-26 14:12:37,011 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,012 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39753', name: 9, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,012 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39753
-2022-08-26 14:12:37,012 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,012 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37907', name: 10, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,013 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37907
-2022-08-26 14:12:37,013 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,013 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44135', name: 11, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,013 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44135
-2022-08-26 14:12:37,013 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,014 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37259', name: 12, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,014 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37259
-2022-08-26 14:12:37,014 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,014 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35889', name: 13, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,015 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35889
-2022-08-26 14:12:37,015 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,015 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36295', name: 14, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,015 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36295
-2022-08-26 14:12:37,015 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,016 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39515', name: 15, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,016 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39515
-2022-08-26 14:12:37,016 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,016 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40795', name: 16, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,017 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40795
-2022-08-26 14:12:37,017 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,017 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35063', name: 17, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,017 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35063
-2022-08-26 14:12:37,017 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,018 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37793', name: 18, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,018 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37793
-2022-08-26 14:12:37,018 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,019 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44169', name: 19, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,019 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44169
-2022-08-26 14:12:37,019 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,019 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43757', name: 20, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,020 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43757
-2022-08-26 14:12:37,020 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,020 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,021 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,021 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,021 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,021 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,021 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,021 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,022 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,022 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,022 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,022 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,022 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,022 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,022 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,023 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,023 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,023 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,023 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,023 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,023 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,024 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,024 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,024 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,024 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,024 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,024 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,025 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,025 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,025 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,025 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,025 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,025 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,025 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,026 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,026 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,026 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,026 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,026 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,026 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,026 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,027 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43261
-2022-08-26 14:12:37,027 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,028 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,028 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,028 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,028 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,028 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,028 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,028 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,028 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,029 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,044 - distributed.scheduler - INFO - Receive client connection: Client-cf7dafe1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,045 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,157 - distributed.scheduler - INFO - Remove client Client-cf7dafe1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,157 - distributed.scheduler - INFO - Remove client Client-cf7dafe1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,157 - distributed.scheduler - INFO - Close client connection: Client-cf7dafe1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,159 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45751
-2022-08-26 14:12:37,160 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45843
-2022-08-26 14:12:37,160 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40643
-2022-08-26 14:12:37,161 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43005
-2022-08-26 14:12:37,161 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38669
-2022-08-26 14:12:37,161 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42613
-2022-08-26 14:12:37,161 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35671
-2022-08-26 14:12:37,162 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43553
-2022-08-26 14:12:37,162 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37999
-2022-08-26 14:12:37,162 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39753
-2022-08-26 14:12:37,163 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37907
-2022-08-26 14:12:37,163 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44135
-2022-08-26 14:12:37,163 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37259
-2022-08-26 14:12:37,163 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35889
-2022-08-26 14:12:37,164 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36295
-2022-08-26 14:12:37,164 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39515
-2022-08-26 14:12:37,164 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40795
-2022-08-26 14:12:37,165 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35063
-2022-08-26 14:12:37,165 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37793
-2022-08-26 14:12:37,165 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44169
-2022-08-26 14:12:37,166 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43757
-2022-08-26 14:12:37,171 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45751', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,171 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45751
-2022-08-26 14:12:37,171 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45843', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,171 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45843
-2022-08-26 14:12:37,171 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40643', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,172 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40643
-2022-08-26 14:12:37,172 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43005', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,172 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43005
-2022-08-26 14:12:37,172 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38669', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,172 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38669
-2022-08-26 14:12:37,172 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42613', name: 5, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,172 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42613
-2022-08-26 14:12:37,172 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35671', name: 6, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,172 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35671
-2022-08-26 14:12:37,173 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43553', name: 7, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,173 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43553
-2022-08-26 14:12:37,173 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37999', name: 8, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,173 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37999
-2022-08-26 14:12:37,173 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39753', name: 9, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,173 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39753
-2022-08-26 14:12:37,173 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37907', name: 10, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,173 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37907
-2022-08-26 14:12:37,173 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44135', name: 11, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,174 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44135
-2022-08-26 14:12:37,174 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37259', name: 12, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,174 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37259
-2022-08-26 14:12:37,174 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35889', name: 13, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,174 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35889
-2022-08-26 14:12:37,174 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36295', name: 14, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,174 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36295
-2022-08-26 14:12:37,174 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39515', name: 15, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,174 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39515
-2022-08-26 14:12:37,175 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40795', name: 16, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,175 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40795
-2022-08-26 14:12:37,175 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35063', name: 17, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,175 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35063
-2022-08-26 14:12:37,175 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37793', name: 18, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,175 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37793
-2022-08-26 14:12:37,175 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44169', name: 19, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,175 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44169
-2022-08-26 14:12:37,175 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43757', name: 20, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,175 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43757
-2022-08-26 14:12:37,175 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:37,176 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-520cddda-2523-4cac-a432-d994d03e4b29 Address tcp://127.0.0.1:45751 Status: Status.closing
-2022-08-26 14:12:37,176 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-da4e1c0b-33c0-458d-9522-3cf23c4eb566 Address tcp://127.0.0.1:45843 Status: Status.closing
-2022-08-26 14:12:37,176 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ff69e01a-e991-4efa-ae1d-a207d9824082 Address tcp://127.0.0.1:40643 Status: Status.closing
-2022-08-26 14:12:37,176 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-aaf85cef-4a19-44f3-9705-db92c1616881 Address tcp://127.0.0.1:43005 Status: Status.closing
-2022-08-26 14:12:37,177 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f8543bde-497b-434b-9e12-ba5144958c10 Address tcp://127.0.0.1:38669 Status: Status.closing
-2022-08-26 14:12:37,177 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-190915f3-8a52-4583-83bb-7cde1bb23473 Address tcp://127.0.0.1:42613 Status: Status.closing
-2022-08-26 14:12:37,177 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-efb3bc88-bc45-477d-b05a-364b350c9ebf Address tcp://127.0.0.1:35671 Status: Status.closing
-2022-08-26 14:12:37,177 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-19c76829-e35b-4b56-9a2a-3450be62a53c Address tcp://127.0.0.1:43553 Status: Status.closing
-2022-08-26 14:12:37,177 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3deebcb1-8637-44ed-9591-79789c2536af Address tcp://127.0.0.1:37999 Status: Status.closing
-2022-08-26 14:12:37,178 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-06f49ace-cd52-4662-af85-f54405fb5ac4 Address tcp://127.0.0.1:39753 Status: Status.closing
-2022-08-26 14:12:37,178 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ac6a7533-64ea-4c63-b466-122f97ac99e2 Address tcp://127.0.0.1:37907 Status: Status.closing
-2022-08-26 14:12:37,178 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e2b81fb8-24d0-469e-a781-1fe5ffac2db0 Address tcp://127.0.0.1:44135 Status: Status.closing
-2022-08-26 14:12:37,178 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1d882f9c-731b-4bfa-a0b2-22abec347c42 Address tcp://127.0.0.1:37259 Status: Status.closing
-2022-08-26 14:12:37,178 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d8fa4993-553d-4a3b-a203-62dcd8b01ff4 Address tcp://127.0.0.1:35889 Status: Status.closing
-2022-08-26 14:12:37,179 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-22ea7cd9-8cc9-4a80-a08e-2ae82fb65fa5 Address tcp://127.0.0.1:36295 Status: Status.closing
-2022-08-26 14:12:37,179 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6acaca49-4eae-4040-a9cc-b66450ae45c9 Address tcp://127.0.0.1:39515 Status: Status.closing
-2022-08-26 14:12:37,179 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-59c864a7-106b-4beb-a90d-d6afb6a011aa Address tcp://127.0.0.1:40795 Status: Status.closing
-2022-08-26 14:12:37,179 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a35e5c8b-da14-469c-a003-fd233c654524 Address tcp://127.0.0.1:35063 Status: Status.closing
-2022-08-26 14:12:37,180 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a5d797c0-cef5-4216-8e35-7929dbfb2aee Address tcp://127.0.0.1:37793 Status: Status.closing
-2022-08-26 14:12:37,180 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b51d3be4-a6d5-4844-b833-9f2a7d637744 Address tcp://127.0.0.1:44169 Status: Status.closing
-2022-08-26 14:12:37,180 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9c18381a-c1ec-40ff-b7e2-f8b40cf97fc2 Address tcp://127.0.0.1:43757 Status: Status.closing
-2022-08-26 14:12:37,188 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:37,189 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gather_many_small[False] 2022-08-26 14:12:37,429 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:37,431 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:37,431 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39245
-2022-08-26 14:12:37,431 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46561
-2022-08-26 14:12:37,471 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42251
-2022-08-26 14:12:37,471 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42251
-2022-08-26 14:12:37,471 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:37,471 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35843
-2022-08-26 14:12:37,471 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,471 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,471 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,472 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,472 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yqev_879
-2022-08-26 14:12:37,472 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,472 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37755
-2022-08-26 14:12:37,472 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37755
-2022-08-26 14:12:37,472 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:37,473 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39657
-2022-08-26 14:12:37,473 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,473 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,473 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,473 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,473 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_an7nwby
-2022-08-26 14:12:37,473 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,474 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36927
-2022-08-26 14:12:37,474 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36927
-2022-08-26 14:12:37,474 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:12:37,474 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39625
-2022-08-26 14:12:37,474 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,474 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,474 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,474 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,474 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zzc2uued
-2022-08-26 14:12:37,475 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,475 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40815
-2022-08-26 14:12:37,475 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40815
-2022-08-26 14:12:37,475 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:12:37,475 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42943
-2022-08-26 14:12:37,475 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,476 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,476 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,476 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,476 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s_fgkiqr
-2022-08-26 14:12:37,476 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,476 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40413
-2022-08-26 14:12:37,477 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40413
-2022-08-26 14:12:37,477 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:12:37,477 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43773
-2022-08-26 14:12:37,477 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,477 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,477 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,477 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,477 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y0350s5y
-2022-08-26 14:12:37,477 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,478 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45561
-2022-08-26 14:12:37,478 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45561
-2022-08-26 14:12:37,478 - distributed.worker - INFO -           Worker name:                          5
-2022-08-26 14:12:37,478 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39675
-2022-08-26 14:12:37,478 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,478 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,478 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,478 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,479 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4aw6hxht
-2022-08-26 14:12:37,479 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,479 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37077
-2022-08-26 14:12:37,479 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37077
-2022-08-26 14:12:37,479 - distributed.worker - INFO -           Worker name:                          6
-2022-08-26 14:12:37,479 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40705
-2022-08-26 14:12:37,480 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,480 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,480 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,480 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,480 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jars6o0o
-2022-08-26 14:12:37,480 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,481 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45181
-2022-08-26 14:12:37,481 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45181
-2022-08-26 14:12:37,481 - distributed.worker - INFO -           Worker name:                          7
-2022-08-26 14:12:37,481 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36577
-2022-08-26 14:12:37,481 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,481 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,481 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,481 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,481 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2r3ygswj
-2022-08-26 14:12:37,481 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,482 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45091
-2022-08-26 14:12:37,482 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45091
-2022-08-26 14:12:37,482 - distributed.worker - INFO -           Worker name:                          8
-2022-08-26 14:12:37,482 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34773
-2022-08-26 14:12:37,482 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,482 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,483 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,483 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,483 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e7qjc8rx
-2022-08-26 14:12:37,483 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,483 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33463
-2022-08-26 14:12:37,483 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33463
-2022-08-26 14:12:37,484 - distributed.worker - INFO -           Worker name:                          9
-2022-08-26 14:12:37,484 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40581
-2022-08-26 14:12:37,484 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,484 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,484 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,484 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,484 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8_glm8nh
-2022-08-26 14:12:37,484 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,485 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40841
-2022-08-26 14:12:37,485 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40841
-2022-08-26 14:12:37,485 - distributed.worker - INFO -           Worker name:                         10
-2022-08-26 14:12:37,485 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42491
-2022-08-26 14:12:37,485 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,485 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,485 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,485 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,486 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3d5is0o3
-2022-08-26 14:12:37,486 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,486 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33917
-2022-08-26 14:12:37,486 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33917
-2022-08-26 14:12:37,486 - distributed.worker - INFO -           Worker name:                         11
-2022-08-26 14:12:37,486 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33609
-2022-08-26 14:12:37,487 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,487 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,487 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,487 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,487 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3bnlxhve
-2022-08-26 14:12:37,487 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,488 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38855
-2022-08-26 14:12:37,488 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38855
-2022-08-26 14:12:37,488 - distributed.worker - INFO -           Worker name:                         12
-2022-08-26 14:12:37,488 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37539
-2022-08-26 14:12:37,488 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,488 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,488 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,488 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,488 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e4ztl1yk
-2022-08-26 14:12:37,488 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,489 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39327
-2022-08-26 14:12:37,489 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39327
-2022-08-26 14:12:37,489 - distributed.worker - INFO -           Worker name:                         13
-2022-08-26 14:12:37,489 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46391
-2022-08-26 14:12:37,489 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,489 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,489 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,490 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,490 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-heriezu9
-2022-08-26 14:12:37,490 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,490 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37983
-2022-08-26 14:12:37,490 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37983
-2022-08-26 14:12:37,490 - distributed.worker - INFO -           Worker name:                         14
-2022-08-26 14:12:37,491 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37451
-2022-08-26 14:12:37,491 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,491 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,491 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,491 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,491 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ap62u_go
-2022-08-26 14:12:37,491 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,492 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36729
-2022-08-26 14:12:37,492 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36729
-2022-08-26 14:12:37,492 - distributed.worker - INFO -           Worker name:                         15
-2022-08-26 14:12:37,492 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34179
-2022-08-26 14:12:37,492 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,492 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,492 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,492 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,492 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jl9tvonz
-2022-08-26 14:12:37,493 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,493 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43807
-2022-08-26 14:12:37,493 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43807
-2022-08-26 14:12:37,493 - distributed.worker - INFO -           Worker name:                         16
-2022-08-26 14:12:37,493 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39975
-2022-08-26 14:12:37,493 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,494 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,494 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,494 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,494 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qf5bwkoi
-2022-08-26 14:12:37,494 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,495 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36281
-2022-08-26 14:12:37,495 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36281
-2022-08-26 14:12:37,495 - distributed.worker - INFO -           Worker name:                         17
-2022-08-26 14:12:37,495 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43347
-2022-08-26 14:12:37,495 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,495 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,496 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,496 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,496 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wvzyf11m
-2022-08-26 14:12:37,496 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,496 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36237
-2022-08-26 14:12:37,496 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36237
-2022-08-26 14:12:37,497 - distributed.worker - INFO -           Worker name:                         18
-2022-08-26 14:12:37,497 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38555
-2022-08-26 14:12:37,497 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,497 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,497 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,497 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,497 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_x10di0w
-2022-08-26 14:12:37,497 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,498 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41859
-2022-08-26 14:12:37,498 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41859
-2022-08-26 14:12:37,498 - distributed.worker - INFO -           Worker name:                         19
-2022-08-26 14:12:37,498 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46627
-2022-08-26 14:12:37,498 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,498 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,498 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,498 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,498 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8nfh7uyo
-2022-08-26 14:12:37,499 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,499 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36539
-2022-08-26 14:12:37,499 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36539
-2022-08-26 14:12:37,499 - distributed.worker - INFO -           Worker name:                         20
-2022-08-26 14:12:37,499 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40901
-2022-08-26 14:12:37,499 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,500 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,500 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,500 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,500 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oif8gvun
-2022-08-26 14:12:37,500 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,520 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42251', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,521 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42251
-2022-08-26 14:12:37,521 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,521 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37755', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,521 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37755
-2022-08-26 14:12:37,521 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,522 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36927', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,522 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36927
-2022-08-26 14:12:37,522 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,522 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40815', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,523 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40815
-2022-08-26 14:12:37,523 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,523 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40413', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,523 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40413
-2022-08-26 14:12:37,523 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,524 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45561', name: 5, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,524 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45561
-2022-08-26 14:12:37,524 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,524 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37077', name: 6, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,525 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37077
-2022-08-26 14:12:37,525 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,525 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45181', name: 7, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,525 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45181
-2022-08-26 14:12:37,525 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,526 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45091', name: 8, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,526 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45091
-2022-08-26 14:12:37,526 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,526 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33463', name: 9, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,527 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33463
-2022-08-26 14:12:37,527 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,527 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40841', name: 10, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,527 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40841
-2022-08-26 14:12:37,527 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,528 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33917', name: 11, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,528 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33917
-2022-08-26 14:12:37,528 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,528 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38855', name: 12, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,529 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38855
-2022-08-26 14:12:37,529 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,529 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39327', name: 13, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,529 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39327
-2022-08-26 14:12:37,529 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,530 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37983', name: 14, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,530 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37983
-2022-08-26 14:12:37,530 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,530 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36729', name: 15, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,531 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36729
-2022-08-26 14:12:37,531 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,531 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43807', name: 16, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,531 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43807
-2022-08-26 14:12:37,531 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,532 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36281', name: 17, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,532 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36281
-2022-08-26 14:12:37,532 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,532 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36237', name: 18, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,533 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36237
-2022-08-26 14:12:37,533 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,533 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41859', name: 19, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,533 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41859
-2022-08-26 14:12:37,533 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,534 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36539', name: 20, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,534 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36539
-2022-08-26 14:12:37,534 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,535 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,535 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,535 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,535 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,536 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,536 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,536 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,536 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,536 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,537 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,537 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,537 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,537 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,537 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,537 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,538 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,538 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,538 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,538 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,538 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,538 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,539 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,539 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,539 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,539 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,539 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,540 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,540 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,540 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,540 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,540 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,540 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,541 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,541 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,541 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,541 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,541 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,541 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,542 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,542 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,542 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39245
-2022-08-26 14:12:37,542 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,543 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,543 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,545 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,545 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,545 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,545 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,560 - distributed.scheduler - INFO - Receive client connection: Client-cfcc4dfe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,560 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,668 - distributed.scheduler - INFO - Remove client Client-cfcc4dfe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,669 - distributed.scheduler - INFO - Remove client Client-cfcc4dfe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,669 - distributed.scheduler - INFO - Close client connection: Client-cfcc4dfe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,671 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42251
-2022-08-26 14:12:37,672 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37755
-2022-08-26 14:12:37,672 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36927
-2022-08-26 14:12:37,672 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40815
-2022-08-26 14:12:37,673 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40413
-2022-08-26 14:12:37,673 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45561
-2022-08-26 14:12:37,673 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37077
-2022-08-26 14:12:37,674 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45181
-2022-08-26 14:12:37,674 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45091
-2022-08-26 14:12:37,674 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33463
-2022-08-26 14:12:37,675 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40841
-2022-08-26 14:12:37,675 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33917
-2022-08-26 14:12:37,675 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38855
-2022-08-26 14:12:37,675 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39327
-2022-08-26 14:12:37,676 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37983
-2022-08-26 14:12:37,676 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36729
-2022-08-26 14:12:37,676 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43807
-2022-08-26 14:12:37,677 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36281
-2022-08-26 14:12:37,677 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36237
-2022-08-26 14:12:37,677 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41859
-2022-08-26 14:12:37,678 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36539
-2022-08-26 14:12:37,683 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42251', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,683 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42251
-2022-08-26 14:12:37,683 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37755', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,683 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37755
-2022-08-26 14:12:37,684 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36927', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,684 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36927
-2022-08-26 14:12:37,684 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40815', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,684 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40815
-2022-08-26 14:12:37,684 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40413', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,684 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40413
-2022-08-26 14:12:37,684 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45561', name: 5, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,684 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45561
-2022-08-26 14:12:37,684 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37077', name: 6, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,685 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37077
-2022-08-26 14:12:37,685 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45181', name: 7, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,685 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45181
-2022-08-26 14:12:37,685 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45091', name: 8, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,685 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45091
-2022-08-26 14:12:37,685 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33463', name: 9, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,685 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33463
-2022-08-26 14:12:37,685 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40841', name: 10, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,685 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40841
-2022-08-26 14:12:37,686 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33917', name: 11, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,686 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33917
-2022-08-26 14:12:37,686 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38855', name: 12, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,686 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38855
-2022-08-26 14:12:37,686 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39327', name: 13, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,686 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39327
-2022-08-26 14:12:37,686 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37983', name: 14, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,686 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37983
-2022-08-26 14:12:37,686 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36729', name: 15, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,686 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36729
-2022-08-26 14:12:37,687 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43807', name: 16, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,687 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43807
-2022-08-26 14:12:37,687 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36281', name: 17, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,687 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36281
-2022-08-26 14:12:37,687 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36237', name: 18, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,687 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36237
-2022-08-26 14:12:37,687 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41859', name: 19, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,687 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41859
-2022-08-26 14:12:37,687 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36539', name: 20, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:37,688 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36539
-2022-08-26 14:12:37,688 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:37,688 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d2d02181-fabe-47f1-a7fa-6a5c97d8b9e7 Address tcp://127.0.0.1:42251 Status: Status.closing
-2022-08-26 14:12:37,688 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-696f0a93-7c62-4d79-8721-b73c29669a75 Address tcp://127.0.0.1:37755 Status: Status.closing
-2022-08-26 14:12:37,688 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-eb5e6580-6790-4b16-8db7-3d6d63f24a2e Address tcp://127.0.0.1:36927 Status: Status.closing
-2022-08-26 14:12:37,689 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e5667838-c593-4746-87e4-e1af27922b36 Address tcp://127.0.0.1:40815 Status: Status.closing
-2022-08-26 14:12:37,689 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-10ded8c1-2189-4e64-9a39-e3381754f2d8 Address tcp://127.0.0.1:40413 Status: Status.closing
-2022-08-26 14:12:37,689 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-36af5809-2de3-4d90-8130-d6f453a54a73 Address tcp://127.0.0.1:45561 Status: Status.closing
-2022-08-26 14:12:37,689 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c37c45f6-b0e5-4c95-93c3-ce6265069119 Address tcp://127.0.0.1:37077 Status: Status.closing
-2022-08-26 14:12:37,689 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8e7a83ea-e27d-4a1a-b875-bac9ee8d0c29 Address tcp://127.0.0.1:45181 Status: Status.closing
-2022-08-26 14:12:37,690 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0219d7a4-f84e-4001-8b67-1c5671ae8656 Address tcp://127.0.0.1:45091 Status: Status.closing
-2022-08-26 14:12:37,690 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ad93ddfa-3140-4b28-89ae-2ba68124cc6d Address tcp://127.0.0.1:33463 Status: Status.closing
-2022-08-26 14:12:37,690 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9afde227-49c2-47af-9f6d-5e3608c4ebef Address tcp://127.0.0.1:40841 Status: Status.closing
-2022-08-26 14:12:37,690 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d6ab88b4-1780-491e-95ec-3f67576f53e4 Address tcp://127.0.0.1:33917 Status: Status.closing
-2022-08-26 14:12:37,690 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-053e3fcc-92ef-4e64-a95f-2f808102e459 Address tcp://127.0.0.1:38855 Status: Status.closing
-2022-08-26 14:12:37,691 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f542323b-0541-4de5-9a26-ff07b7214e6a Address tcp://127.0.0.1:39327 Status: Status.closing
-2022-08-26 14:12:37,691 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-347243a8-62e0-4553-957f-f636049e97a4 Address tcp://127.0.0.1:37983 Status: Status.closing
-2022-08-26 14:12:37,691 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dd355ca0-57bc-44cc-ad77-cb1107f32752 Address tcp://127.0.0.1:36729 Status: Status.closing
-2022-08-26 14:12:37,692 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-45d0aed3-f9dc-4416-b9b3-b38103bd6069 Address tcp://127.0.0.1:43807 Status: Status.closing
-2022-08-26 14:12:37,692 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c55ed48c-730e-431a-a6ac-2ef3fda76fbb Address tcp://127.0.0.1:36281 Status: Status.closing
-2022-08-26 14:12:37,692 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-11acf470-8393-4c43-9872-e5d4b36ea34b Address tcp://127.0.0.1:36237 Status: Status.closing
-2022-08-26 14:12:37,692 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a5023c28-bdd4-4de4-a523-dcfaba5629d8 Address tcp://127.0.0.1:41859 Status: Status.closing
-2022-08-26 14:12:37,692 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e80b1362-48a3-4880-9f3d-627965157597 Address tcp://127.0.0.1:36539 Status: Status.closing
-2022-08-26 14:12:37,699 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:37,699 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_multiple_transfers 2022-08-26 14:12:37,942 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:37,943 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:37,943 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42217
-2022-08-26 14:12:37,943 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41235
-2022-08-26 14:12:37,950 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33451
-2022-08-26 14:12:37,950 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33451
-2022-08-26 14:12:37,950 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:37,950 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43033
-2022-08-26 14:12:37,950 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42217
-2022-08-26 14:12:37,950 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,950 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,950 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,950 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fav66tfc
-2022-08-26 14:12:37,951 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,951 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43263
-2022-08-26 14:12:37,951 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43263
-2022-08-26 14:12:37,951 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:37,951 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41331
-2022-08-26 14:12:37,951 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42217
-2022-08-26 14:12:37,951 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,952 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,952 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,952 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2rmqmw5z
-2022-08-26 14:12:37,952 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,952 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36497
-2022-08-26 14:12:37,952 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36497
-2022-08-26 14:12:37,952 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:12:37,952 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41505
-2022-08-26 14:12:37,953 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42217
-2022-08-26 14:12:37,953 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,953 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:37,953 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:37,953 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ebuekcoc
-2022-08-26 14:12:37,953 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,957 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33451', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,957 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33451
-2022-08-26 14:12:37,957 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,957 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43263', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,958 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43263
-2022-08-26 14:12:37,958 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,958 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36497', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:37,958 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36497
-2022-08-26 14:12:37,958 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,959 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42217
-2022-08-26 14:12:37,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,959 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42217
-2022-08-26 14:12:37,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,959 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42217
-2022-08-26 14:12:37,959 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:37,960 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,960 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,960 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,974 - distributed.scheduler - INFO - Receive client connection: Client-d00b8570-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,974 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:37,996 - distributed.scheduler - INFO - Remove client Client-d00b8570-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,997 - distributed.scheduler - INFO - Remove client Client-d00b8570-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,997 - distributed.scheduler - INFO - Close client connection: Client-d00b8570-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:37,999 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33451
-2022-08-26 14:12:37,999 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43263
-2022-08-26 14:12:37,999 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36497
-2022-08-26 14:12:38,000 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-06157bc9-ccc5-4a0f-a96d-45ee34631be2 Address tcp://127.0.0.1:33451 Status: Status.closing
-2022-08-26 14:12:38,001 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2ef9034e-ca40-4445-b4f2-895586310b81 Address tcp://127.0.0.1:43263 Status: Status.closing
-2022-08-26 14:12:38,001 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0aa066a3-977e-43be-9c3c-20fc987c2d5c Address tcp://127.0.0.1:36497 Status: Status.closing
-2022-08-26 14:12:38,002 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33451', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:38,002 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33451
-2022-08-26 14:12:38,002 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43263', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:38,002 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43263
-2022-08-26 14:12:38,002 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36497', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:38,002 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36497
-2022-08-26 14:12:38,002 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:38,004 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:38,004 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_share_communication 2022-08-26 14:12:38,239 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:38,241 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:38,241 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39677
-2022-08-26 14:12:38,241 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41713
-2022-08-26 14:12:38,247 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36681
-2022-08-26 14:12:38,247 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36681
-2022-08-26 14:12:38,247 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:38,247 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40177
-2022-08-26 14:12:38,247 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39677
-2022-08-26 14:12:38,248 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,248 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:38,248 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:38,248 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yerhtvis
-2022-08-26 14:12:38,248 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,248 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39813
-2022-08-26 14:12:38,248 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39813
-2022-08-26 14:12:38,248 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:38,248 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40213
-2022-08-26 14:12:38,248 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39677
-2022-08-26 14:12:38,248 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,249 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:38,249 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:38,249 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wfdkazov
-2022-08-26 14:12:38,249 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,249 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38563
-2022-08-26 14:12:38,249 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38563
-2022-08-26 14:12:38,249 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:12:38,249 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35707
-2022-08-26 14:12:38,249 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39677
-2022-08-26 14:12:38,249 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,249 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:38,250 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:38,250 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gd93zos1
-2022-08-26 14:12:38,250 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,253 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36681', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:38,254 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36681
-2022-08-26 14:12:38,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,254 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39813', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:38,254 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39813
-2022-08-26 14:12:38,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,255 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38563', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:38,255 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38563
-2022-08-26 14:12:38,255 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,255 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39677
-2022-08-26 14:12:38,255 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,256 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39677
-2022-08-26 14:12:38,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,256 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39677
-2022-08-26 14:12:38,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,256 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,256 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,256 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,270 - distributed.scheduler - INFO - Receive client connection: Client-d038c643-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,270 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,282 - distributed.scheduler - INFO - Remove client Client-d038c643-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,282 - distributed.scheduler - INFO - Remove client Client-d038c643-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,282 - distributed.scheduler - INFO - Close client connection: Client-d038c643-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,282 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36681
-2022-08-26 14:12:38,283 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39813
-2022-08-26 14:12:38,283 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38563
-2022-08-26 14:12:38,284 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36681', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:38,285 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36681
-2022-08-26 14:12:38,285 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39813', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:38,285 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39813
-2022-08-26 14:12:38,285 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38563', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:38,285 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38563
-2022-08-26 14:12:38,285 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:38,285 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-347c428e-7e7d-4f47-a3fa-8abcf794eb40 Address tcp://127.0.0.1:36681 Status: Status.closing
-2022-08-26 14:12:38,285 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6b35c651-7ffe-400d-97d1-196c35357c4c Address tcp://127.0.0.1:39813 Status: Status.closing
-2022-08-26 14:12:38,286 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6bf83684-a5b0-4888-a8c1-f8856aa86ec2 Address tcp://127.0.0.1:38563 Status: Status.closing
-2022-08-26 14:12:38,287 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:38,287 - distributed.scheduler - INFO - Scheduler closing all comms
-XFAIL (ve...)
-distributed/tests/test_worker.py::test_dont_overlap_communications_to_same_worker 2022-08-26 14:12:38,429 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:38,431 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:38,431 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45863
-2022-08-26 14:12:38,431 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39117
-2022-08-26 14:12:38,435 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37247
-2022-08-26 14:12:38,435 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37247
-2022-08-26 14:12:38,436 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:38,436 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39499
-2022-08-26 14:12:38,436 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45863
-2022-08-26 14:12:38,436 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,436 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:38,436 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:38,436 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9l5j_dkh
-2022-08-26 14:12:38,436 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,436 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35419
-2022-08-26 14:12:38,436 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35419
-2022-08-26 14:12:38,437 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:38,437 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40903
-2022-08-26 14:12:38,437 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45863
-2022-08-26 14:12:38,437 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,437 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:38,437 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:38,437 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jorz65uf
-2022-08-26 14:12:38,437 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,440 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37247', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:38,440 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37247
-2022-08-26 14:12:38,440 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,440 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35419', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:38,441 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35419
-2022-08-26 14:12:38,441 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,441 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45863
-2022-08-26 14:12:38,441 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,441 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45863
-2022-08-26 14:12:38,441 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,442 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,442 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,455 - distributed.scheduler - INFO - Receive client connection: Client-d0550272-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,455 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,467 - distributed.scheduler - INFO - Remove client Client-d0550272-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,467 - distributed.scheduler - INFO - Remove client Client-d0550272-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,467 - distributed.scheduler - INFO - Close client connection: Client-d0550272-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,467 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37247
-2022-08-26 14:12:38,468 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35419
-2022-08-26 14:12:38,469 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37247', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:38,469 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37247
-2022-08-26 14:12:38,469 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35419', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:38,469 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35419
-2022-08-26 14:12:38,469 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:38,469 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3d946aa0-e64c-4c50-b279-f1511f4a8f2c Address tcp://127.0.0.1:37247 Status: Status.closing
-2022-08-26 14:12:38,469 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-02b294cd-26d1-4907-b1aa-c21ccb882b2e Address tcp://127.0.0.1:35419 Status: Status.closing
-2022-08-26 14:12:38,470 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:38,470 - distributed.scheduler - INFO - Scheduler closing all comms
-XFAIL
-distributed/tests/test_worker.py::test_log_exception_on_failed_task 2022-08-26 14:12:38,612 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:38,614 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:38,614 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36909
-2022-08-26 14:12:38,614 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35407
-2022-08-26 14:12:38,618 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41361
-2022-08-26 14:12:38,619 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41361
-2022-08-26 14:12:38,619 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:38,619 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37953
-2022-08-26 14:12:38,619 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36909
-2022-08-26 14:12:38,619 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,619 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:38,619 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:38,619 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e0ahl8zq
-2022-08-26 14:12:38,619 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,619 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45903
-2022-08-26 14:12:38,620 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45903
-2022-08-26 14:12:38,620 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:38,620 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40085
-2022-08-26 14:12:38,620 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36909
-2022-08-26 14:12:38,620 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,620 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:38,620 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:38,620 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_zzeoxe1
-2022-08-26 14:12:38,620 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,623 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41361', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:38,623 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41361
-2022-08-26 14:12:38,623 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,623 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45903', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:38,624 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45903
-2022-08-26 14:12:38,624 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,624 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36909
-2022-08-26 14:12:38,624 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,624 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36909
-2022-08-26 14:12:38,624 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:38,625 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,625 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,638 - distributed.scheduler - INFO - Receive client connection: Client-d070f412-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,639 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:38,651 - distributed.worker - WARNING - Compute Failed
-Key:       div-beaac0206246b34d3625d21194e03c13
-Function:  div
-args:      (1, 0)
-kwargs:    {}
-Exception: "ZeroDivisionError('division by zero')"
-
-2022-08-26 14:12:38,764 - distributed.scheduler - INFO - Remove client Client-d070f412-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,765 - distributed.scheduler - INFO - Remove client Client-d070f412-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,765 - distributed.scheduler - INFO - Close client connection: Client-d070f412-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:38,765 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41361
-2022-08-26 14:12:38,766 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45903
-2022-08-26 14:12:38,766 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41361', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:38,767 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41361
-2022-08-26 14:12:38,767 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45903', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:38,767 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45903
-2022-08-26 14:12:38,767 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:38,767 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-299d90ae-73f6-4631-9490-6a7789a7dcb6 Address tcp://127.0.0.1:41361 Status: Status.closing
-2022-08-26 14:12:38,767 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-17aed3dc-7592-4995-acc5-ccc86f779d4b Address tcp://127.0.0.1:45903 Status: Status.closing
-2022-08-26 14:12:38,768 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:38,768 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_clean_up_dependencies 2022-08-26 14:12:39,001 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:39,003 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:39,003 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39007
-2022-08-26 14:12:39,003 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38271
-2022-08-26 14:12:39,008 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38343
-2022-08-26 14:12:39,008 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38343
-2022-08-26 14:12:39,008 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:39,008 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45219
-2022-08-26 14:12:39,008 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39007
-2022-08-26 14:12:39,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,008 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:39,008 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:39,008 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-n83wofen
-2022-08-26 14:12:39,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,008 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33359
-2022-08-26 14:12:39,009 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33359
-2022-08-26 14:12:39,009 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:39,009 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33025
-2022-08-26 14:12:39,009 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39007
-2022-08-26 14:12:39,009 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,009 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:39,009 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:39,009 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3jepjm63
-2022-08-26 14:12:39,009 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,012 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38343', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:39,012 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38343
-2022-08-26 14:12:39,012 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,012 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33359', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:39,013 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33359
-2022-08-26 14:12:39,013 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,013 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39007
-2022-08-26 14:12:39,013 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,013 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39007
-2022-08-26 14:12:39,013 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,014 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,014 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,027 - distributed.scheduler - INFO - Receive client connection: Client-d0ac4c3d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:39,027 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,070 - distributed.scheduler - INFO - Remove client Client-d0ac4c3d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:39,070 - distributed.scheduler - INFO - Remove client Client-d0ac4c3d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:39,071 - distributed.scheduler - INFO - Close client connection: Client-d0ac4c3d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:39,071 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38343
-2022-08-26 14:12:39,071 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33359
-2022-08-26 14:12:39,072 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38343', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:39,072 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38343
-2022-08-26 14:12:39,072 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33359', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:39,072 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33359
-2022-08-26 14:12:39,072 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:39,073 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-711512eb-b4a5-41e0-b9e7-34d9405cb71c Address tcp://127.0.0.1:38343 Status: Status.closing
-2022-08-26 14:12:39,073 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5c25dcbe-668d-4b9b-a9b0-9f8f8253955f Address tcp://127.0.0.1:33359 Status: Status.closing
-2022-08-26 14:12:39,074 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:39,075 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
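
test_clean_up_dependencies above checks that intermediate results are released from worker memory once nothing depends on them any more. A rough sketch of that behaviour, assuming illustrative inc/add helpers:

    from dask.distributed import Client, LocalCluster

    def inc(x):
        return x + 1

    def add(x, y):
        return x + y

    if __name__ == "__main__":
        with LocalCluster(n_workers=2, threads_per_worker=1) as cluster:
            with Client(cluster) as client:
                x = client.submit(inc, 1)
                y = client.submit(inc, 2)
                z = client.submit(add, x, y)
                del x, y                 # drop the client references to the inputs
                print(z.result())        # 5
                print(client.who_has())  # eventually only the surviving key(s) remain

Dropping the client-side references to x and y is what lets the scheduler release their keys once z is in memory.
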
-distributed/tests/test_worker.py::test_hold_onto_dependents 2022-08-26 14:12:39,304 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:39,306 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:39,306 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46377
-2022-08-26 14:12:39,306 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38443
-2022-08-26 14:12:39,311 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35021
-2022-08-26 14:12:39,311 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35021
-2022-08-26 14:12:39,311 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:39,311 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37563
-2022-08-26 14:12:39,311 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46377
-2022-08-26 14:12:39,311 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,311 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:39,311 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:39,311 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xb81_251
-2022-08-26 14:12:39,311 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,311 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42193
-2022-08-26 14:12:39,311 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42193
-2022-08-26 14:12:39,312 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:39,312 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35365
-2022-08-26 14:12:39,312 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46377
-2022-08-26 14:12:39,312 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,312 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:39,312 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:39,312 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-l4tp89b8
-2022-08-26 14:12:39,312 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,315 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35021', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:39,315 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35021
-2022-08-26 14:12:39,315 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,315 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42193', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:39,316 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42193
-2022-08-26 14:12:39,316 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,316 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46377
-2022-08-26 14:12:39,316 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,316 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46377
-2022-08-26 14:12:39,316 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,317 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,317 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,330 - distributed.scheduler - INFO - Receive client connection: Client-d0da88a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:39,331 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,350 - distributed.scheduler - INFO - Client Client-d0da88a0-2583-11ed-a99d-00d861bc4509 requests to cancel 1 keys
-2022-08-26 14:12:39,350 - distributed.scheduler - INFO - Scheduler cancels key inc-64456584d8a2e7176e2cd177efaa15f2.  Force=False
-2022-08-26 14:12:39,353 - distributed.scheduler - INFO - Remove client Client-d0da88a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:39,353 - distributed.scheduler - INFO - Remove client Client-d0da88a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:39,353 - distributed.scheduler - INFO - Close client connection: Client-d0da88a0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:39,354 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35021
-2022-08-26 14:12:39,354 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42193
-2022-08-26 14:12:39,355 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42193', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:39,355 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42193
-2022-08-26 14:12:39,355 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-51b4c142-3e1b-4b21-beb8-5eb0e5c1c56b Address tcp://127.0.0.1:35021 Status: Status.closing
-2022-08-26 14:12:39,355 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fbc95d0a-ff5e-49f0-9217-6096f4a4bda9 Address tcp://127.0.0.1:42193 Status: Status.closing
-2022-08-26 14:12:39,356 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35021', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:39,356 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35021
-2022-08-26 14:12:39,356 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:39,357 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:39,357 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
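
The "requests to cancel 1 keys" / "Scheduler cancels key inc-..." lines above come from a client-side cancellation. A hedged sketch of that path (slowinc is an assumed stand-in for the test's slow task):

    import time
    from dask.distributed import Client, LocalCluster

    def slowinc(x, delay=0.5):
        time.sleep(delay)
        return x + 1

    if __name__ == "__main__":
        with LocalCluster(n_workers=2, threads_per_worker=1) as cluster:
            with Client(cluster) as client:
                fut = client.submit(slowinc, 1)
                fut.cancel()        # the scheduler logs the cancellation request
                print(fut.status)   # usually 'cancelled' at this point
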
-distributed/tests/test_worker.py::test_worker_death_timeout 2022-08-26 14:12:39,588 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36833
-2022-08-26 14:12:39,589 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36833
-2022-08-26 14:12:39,589 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42935
-2022-08-26 14:12:39,589 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:12345
-2022-08-26 14:12:39,589 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,589 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:39,589 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:39,589 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1p17e9fx
-2022-08-26 14:12:39,589 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,688 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36833
-2022-08-26 14:12:39,689 - distributed.worker - INFO - Closed worker has not yet started: Status.init
-XPASS (a...)
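
test_worker_death_timeout above starts a worker against tcp://127.0.0.1:12345, where no scheduler is listening, and expects it to give up after its death timeout. A sketch of that path outside the test harness (the 1-second timeout, and the assumption that nothing listens on that port, are illustrative):

    import asyncio
    from distributed import Worker

    async def main():
        try:
            # Assumes nothing is listening on this port; the worker should stop
            # once its 1-second death timeout expires.
            await Worker("tcp://127.0.0.1:12345", death_timeout=1)
        except Exception as exc:
            print("worker gave up:", type(exc).__name__, exc)

    if __name__ == "__main__":
        asyncio.run(main())
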
-distributed/tests/test_worker.py::test_stop_doing_unnecessary_work 2022-08-26 14:12:39,694 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:39,696 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:39,696 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40497
-2022-08-26 14:12:39,696 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34085
-2022-08-26 14:12:39,701 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43561
-2022-08-26 14:12:39,701 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43561
-2022-08-26 14:12:39,701 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:39,701 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42385
-2022-08-26 14:12:39,701 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40497
-2022-08-26 14:12:39,701 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,701 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:39,701 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:39,701 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6fwnnapk
-2022-08-26 14:12:39,701 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,702 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36361
-2022-08-26 14:12:39,702 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36361
-2022-08-26 14:12:39,702 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:39,702 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35687
-2022-08-26 14:12:39,702 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40497
-2022-08-26 14:12:39,702 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,702 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:39,702 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:39,702 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3ue3h6kk
-2022-08-26 14:12:39,702 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,705 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43561', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:39,705 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43561
-2022-08-26 14:12:39,705 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,706 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36361', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:39,706 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36361
-2022-08-26 14:12:39,706 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,706 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40497
-2022-08-26 14:12:39,706 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,706 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40497
-2022-08-26 14:12:39,706 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:39,707 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,707 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:39,720 - distributed.scheduler - INFO - Receive client connection: Client-d1161041-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:39,721 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:40,162 - distributed.scheduler - INFO - Remove client Client-d1161041-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:40,162 - distributed.scheduler - INFO - Remove client Client-d1161041-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:40,163 - distributed.scheduler - INFO - Close client connection: Client-d1161041-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:40,165 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43561
-2022-08-26 14:12:40,165 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36361
-2022-08-26 14:12:40,166 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7d288fa6-83d9-4d9f-aaf7-d6d977913128 Address tcp://127.0.0.1:43561 Status: Status.closing
-2022-08-26 14:12:40,166 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-be3e59ac-3302-403d-a396-d28a3b2ee15d Address tcp://127.0.0.1:36361 Status: Status.closing
-2022-08-26 14:12:40,167 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43561', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:40,167 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43561
-2022-08-26 14:12:40,167 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36361', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:40,167 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36361
-2022-08-26 14:12:40,167 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:40,168 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:40,168 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_priorities 2022-08-26 14:12:40,413 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:40,415 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:40,415 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36393
-2022-08-26 14:12:40,415 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41015
-2022-08-26 14:12:40,418 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36439
-2022-08-26 14:12:40,418 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36439
-2022-08-26 14:12:40,418 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:40,418 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35291
-2022-08-26 14:12:40,418 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36393
-2022-08-26 14:12:40,418 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:40,418 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:40,418 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:40,418 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-njxar41p
-2022-08-26 14:12:40,418 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:40,420 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36439', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:40,420 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36439
-2022-08-26 14:12:40,420 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:40,421 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36393
-2022-08-26 14:12:40,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:40,421 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:40,434 - distributed.scheduler - INFO - Receive client connection: Client-d182ff99-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:40,434 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:40,594 - distributed.scheduler - INFO - Remove client Client-d182ff99-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:40,594 - distributed.scheduler - INFO - Remove client Client-d182ff99-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:40,594 - distributed.scheduler - INFO - Close client connection: Client-d182ff99-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:40,595 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36439
-2022-08-26 14:12:40,595 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36439', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:40,596 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36439
-2022-08-26 14:12:40,596 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:40,596 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-45353cd5-2327-4965-9cff-f54ec09ad0f1 Address tcp://127.0.0.1:36439 Status: Status.closing
-2022-08-26 14:12:40,596 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:40,597 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
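
test_priorities above exercises per-task priorities. Roughly, a priority= value passed at submit time lets more urgent work overtake already-queued tasks on a busy worker; the helper and the numbers below are assumptions:

    from dask.distributed import Client, LocalCluster

    def inc(x):
        return x + 1

    if __name__ == "__main__":
        with LocalCluster(n_workers=1, threads_per_worker=1) as cluster:
            with Client(cluster) as client:
                low = [client.submit(inc, i, priority=-1) for i in range(5)]
                high = client.submit(inc, 100, priority=10)  # should overtake queued low-priority tasks
                print(high.result(), client.gather(low))
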
-distributed/tests/test_worker.py::test_heartbeats 2022-08-26 14:12:40,847 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:40,848 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:40,848 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34717
-2022-08-26 14:12:40,849 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45157
-2022-08-26 14:12:40,853 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37227
-2022-08-26 14:12:40,853 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37227
-2022-08-26 14:12:40,853 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:40,853 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41299
-2022-08-26 14:12:40,853 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34717
-2022-08-26 14:12:40,853 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:40,853 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:40,853 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:40,854 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bw_fkf_7
-2022-08-26 14:12:40,854 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:40,854 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41437
-2022-08-26 14:12:40,854 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41437
-2022-08-26 14:12:40,854 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:40,854 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36309
-2022-08-26 14:12:40,854 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34717
-2022-08-26 14:12:40,854 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:40,854 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:40,854 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:40,854 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xac8b0sy
-2022-08-26 14:12:40,855 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:40,857 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37227', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:40,858 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37227
-2022-08-26 14:12:40,858 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:40,858 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41437', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:40,858 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41437
-2022-08-26 14:12:40,858 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:40,859 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34717
-2022-08-26 14:12:40,859 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:40,859 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34717
-2022-08-26 14:12:40,859 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:40,859 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:40,859 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:40,873 - distributed.scheduler - INFO - Receive client connection: Client-d1c5e8f5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:40,873 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:41,975 - distributed.scheduler - INFO - Remove client Client-d1c5e8f5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:41,975 - distributed.scheduler - INFO - Remove client Client-d1c5e8f5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:41,975 - distributed.scheduler - INFO - Close client connection: Client-d1c5e8f5-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:41,976 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37227
-2022-08-26 14:12:41,976 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41437
-2022-08-26 14:12:41,977 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37227', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:41,977 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37227
-2022-08-26 14:12:41,977 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41437', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:41,977 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41437
-2022-08-26 14:12:41,977 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:41,978 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-803b69a1-cb58-4727-b546-1bce62a8b09a Address tcp://127.0.0.1:37227 Status: Status.closing
-2022-08-26 14:12:41,978 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c1fd7dbf-730c-43a5-b633-a51fab770962 Address tcp://127.0.0.1:41437 Status: Status.closing
-2022-08-26 14:12:41,979 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:41,979 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_dir[Worker] 2022-08-26 14:12:42,209 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:42,211 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:42,211 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44767
-2022-08-26 14:12:42,211 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46075
-2022-08-26 14:12:42,215 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38081
-2022-08-26 14:12:42,215 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38081
-2022-08-26 14:12:42,215 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:42,216 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45659
-2022-08-26 14:12:42,216 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44767
-2022-08-26 14:12:42,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,216 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:42,216 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:42,216 - distributed.worker - INFO -       Local Directory: /tmp/pytest-of-matthew/pytest-12/test_worker_dir_Worker_0/dask-worker-space/worker-dcxa6ovd
-2022-08-26 14:12:42,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,216 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45287
-2022-08-26 14:12:42,216 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45287
-2022-08-26 14:12:42,216 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:42,216 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32883
-2022-08-26 14:12:42,217 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44767
-2022-08-26 14:12:42,217 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,217 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:42,217 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:42,217 - distributed.worker - INFO -       Local Directory: /tmp/pytest-of-matthew/pytest-12/test_worker_dir_Worker_0/dask-worker-space/worker-92o0tsrm
-2022-08-26 14:12:42,217 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,220 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38081', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:42,220 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38081
-2022-08-26 14:12:42,220 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,220 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45287', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:42,221 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45287
-2022-08-26 14:12:42,221 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,221 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44767
-2022-08-26 14:12:42,221 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,221 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44767
-2022-08-26 14:12:42,221 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,221 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,222 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,235 - distributed.scheduler - INFO - Receive client connection: Client-d295c7b0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:42,235 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,247 - distributed.scheduler - INFO - Remove client Client-d295c7b0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:42,247 - distributed.scheduler - INFO - Remove client Client-d295c7b0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:42,247 - distributed.scheduler - INFO - Close client connection: Client-d295c7b0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:42,247 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38081
-2022-08-26 14:12:42,248 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45287
-2022-08-26 14:12:42,249 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38081', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:42,249 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38081
-2022-08-26 14:12:42,249 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45287', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:42,249 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45287
-2022-08-26 14:12:42,249 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:42,249 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6f58e489-456d-4b23-818f-32920bbb8c66 Address tcp://127.0.0.1:38081 Status: Status.closing
-2022-08-26 14:12:42,249 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-63c361ad-f911-4068-bf2c-c93cbeba1a28 Address tcp://127.0.0.1:45287 Status: Status.closing
-2022-08-26 14:12:42,250 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:42,250 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_dir[Nanny] 2022-08-26 14:12:42,480 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:42,482 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:42,482 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41467
-2022-08-26 14:12:42,482 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36387
-2022-08-26 14:12:42,487 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43457
-2022-08-26 14:12:42,487 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43457
-2022-08-26 14:12:42,487 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:42,487 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46731
-2022-08-26 14:12:42,487 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41467
-2022-08-26 14:12:42,487 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,487 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:42,487 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:42,487 - distributed.worker - INFO -       Local Directory: /tmp/pytest-of-matthew/pytest-12/test_worker_dir_Nanny_0/dask-worker-space/worker-b370sbv7
-2022-08-26 14:12:42,487 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,488 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32919
-2022-08-26 14:12:42,488 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32919
-2022-08-26 14:12:42,488 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:42,488 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41059
-2022-08-26 14:12:42,488 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41467
-2022-08-26 14:12:42,488 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,488 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:42,488 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:42,488 - distributed.worker - INFO -       Local Directory: /tmp/pytest-of-matthew/pytest-12/test_worker_dir_Nanny_0/dask-worker-space/worker-sug6vvgd
-2022-08-26 14:12:42,488 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,491 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43457', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:42,491 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43457
-2022-08-26 14:12:42,491 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,492 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32919', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:42,492 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32919
-2022-08-26 14:12:42,492 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,492 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41467
-2022-08-26 14:12:42,492 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,492 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41467
-2022-08-26 14:12:42,492 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,493 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,493 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,506 - distributed.scheduler - INFO - Receive client connection: Client-d2bf29d0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:42,507 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,518 - distributed.scheduler - INFO - Remove client Client-d2bf29d0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:42,518 - distributed.scheduler - INFO - Remove client Client-d2bf29d0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:42,518 - distributed.scheduler - INFO - Close client connection: Client-d2bf29d0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:42,519 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43457
-2022-08-26 14:12:42,519 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32919
-2022-08-26 14:12:42,520 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43457', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:42,520 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43457
-2022-08-26 14:12:42,520 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32919', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:42,520 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32919
-2022-08-26 14:12:42,520 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:42,520 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-763c45be-5393-47f1-a827-77dffe440cf4 Address tcp://127.0.0.1:43457 Status: Status.closing
-2022-08-26 14:12:42,520 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f88926ae-e7da-48fa-8b65-7cdb951dab87 Address tcp://127.0.0.1:32919 Status: Status.closing
-2022-08-26 14:12:42,521 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:42,521 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_default_worker_dir 2022-08-26 14:12:42,750 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:42,752 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:42,752 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43997
-2022-08-26 14:12:42,752 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40517
-2022-08-26 14:12:42,755 - distributed.scheduler - INFO - Receive client connection: Client-d2e5211c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:42,755 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,758 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36717
-2022-08-26 14:12:42,759 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36717
-2022-08-26 14:12:42,759 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44263
-2022-08-26 14:12:42,759 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43997
-2022-08-26 14:12:42,759 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,759 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:42,759 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:42,759 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c7n3i30d
-2022-08-26 14:12:42,759 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,761 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36717', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:42,761 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36717
-2022-08-26 14:12:42,761 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,761 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43997
-2022-08-26 14:12:42,761 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:42,762 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36717
-2022-08-26 14:12:42,762 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:42,762 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3c4ae6e5-74ea-4d1f-a6fe-0b361c63383d Address tcp://127.0.0.1:36717 Status: Status.closing
-2022-08-26 14:12:42,763 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36717', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:42,763 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36717
-2022-08-26 14:12:42,763 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:42,766 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:38569'
-2022-08-26 14:12:43,514 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36815
-2022-08-26 14:12:43,514 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36815
-2022-08-26 14:12:43,514 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42085
-2022-08-26 14:12:43,514 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43997
-2022-08-26 14:12:43,514 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:43,514 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:43,514 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:43,514 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c04rq_ym
-2022-08-26 14:12:43,514 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:43,818 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36815', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:43,819 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36815
-2022-08-26 14:12:43,819 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:43,819 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43997
-2022-08-26 14:12:43,819 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:43,819 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:43,855 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:12:43,855 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:38569'.
-2022-08-26 14:12:43,855 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:12:43,856 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36815
-2022-08-26 14:12:43,857 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c9013444-4515-4065-b8c3-1bfee946aa86 Address tcp://127.0.0.1:36815 Status: Status.closing
-2022-08-26 14:12:43,857 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36815', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:43,857 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36815
-2022-08-26 14:12:43,857 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:43,983 - distributed.scheduler - INFO - Remove client Client-d2e5211c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:43,983 - distributed.scheduler - INFO - Remove client Client-d2e5211c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:43,984 - distributed.scheduler - INFO - Close client connection: Client-d2e5211c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:43,984 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:43,984 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
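
The /tmp/dask-worker-space/worker-* directories in the test_default_worker_dir log above are derived from Dask's temporary-directory setting. A sketch of overriding it (the /tmp/my-dask-space path is purely illustrative):

    import dask
    from dask.distributed import Client, LocalCluster

    if __name__ == "__main__":
        with dask.config.set({"temporary-directory": "/tmp/my-dask-space"}):
            with LocalCluster(n_workers=1) as cluster, Client(cluster) as client:
                # Each worker reports its working directory; with the config above
                # it should live under /tmp/my-dask-space.
                print(client.run(lambda dask_worker: dask_worker.local_directory))

Passing a function that takes a dask_worker argument to client.run is also what produces the "Run out-of-band function 'lambda'" line seen earlier in this test's log.
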
-distributed/tests/test_worker.py::test_dataframe_attribute_error 2022-08-26 14:12:44,216 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:44,218 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:44,218 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36749
-2022-08-26 14:12:44,218 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45733
-2022-08-26 14:12:44,223 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44941
-2022-08-26 14:12:44,223 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44941
-2022-08-26 14:12:44,223 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:44,223 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40851
-2022-08-26 14:12:44,223 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36749
-2022-08-26 14:12:44,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,223 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:44,223 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:44,223 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xh3xvpg9
-2022-08-26 14:12:44,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,224 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38395
-2022-08-26 14:12:44,224 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38395
-2022-08-26 14:12:44,224 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:44,224 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40759
-2022-08-26 14:12:44,224 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36749
-2022-08-26 14:12:44,224 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,224 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:44,224 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:44,224 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_1gs2o06
-2022-08-26 14:12:44,224 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,227 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44941', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:44,227 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44941
-2022-08-26 14:12:44,228 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,228 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38395', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:44,228 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38395
-2022-08-26 14:12:44,228 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,229 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36749
-2022-08-26 14:12:44,229 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,229 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36749
-2022-08-26 14:12:44,229 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,229 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,229 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,243 - distributed.scheduler - INFO - Receive client connection: Client-d3c821c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:44,243 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,255 - distributed.sizeof - WARNING - Sizeof calculation failed. Defaulting to 0.95 MiB
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/sizeof.py", line 17, in safe_sizeof
-    return sizeof(obj)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/utils.py", line 637, in __call__
-    return meth(arg, *args, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/sizeof.py", line 17, in sizeof_default
-    return sys.getsizeof(o)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py", line 970, in __sizeof__
-    raise TypeError("Hello")
-TypeError: Hello
-2022-08-26 14:12:44,256 - distributed.sizeof - WARNING - Sizeof calculation failed. Defaulting to 0.95 MiB
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/sizeof.py", line 17, in safe_sizeof
-    return sizeof(obj)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/utils.py", line 637, in __call__
-    return meth(arg, *args, **kwargs)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/dask/sizeof.py", line 17, in sizeof_default
-    return sys.getsizeof(o)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py", line 970, in __sizeof__
-    raise TypeError("Hello")
-TypeError: Hello
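
The two "Sizeof calculation failed" warnings above show distributed's defensive size estimation: the test object's __sizeof__ deliberately raises TypeError("Hello"), so sys.getsizeof() fails and safe_sizeof falls back to a default estimate instead of propagating the error. A minimal sketch (BrokenSizeof is an illustrative stand-in for the test's object):

    import sys
    from distributed.sizeof import safe_sizeof

    class BrokenSizeof:
        def __sizeof__(self):
            raise TypeError("Hello")

    obj = BrokenSizeof()
    try:
        sys.getsizeof(obj)
    except TypeError as exc:
        print("raw sizeof failed:", exc)

    # safe_sizeof catches the error, logs the warning seen above, and returns a
    # default size estimate instead of raising.
    print(safe_sizeof(obj))
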
-2022-08-26 14:12:44,265 - distributed.scheduler - INFO - Remove client Client-d3c821c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:44,265 - distributed.scheduler - INFO - Remove client Client-d3c821c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:44,266 - distributed.scheduler - INFO - Close client connection: Client-d3c821c2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:44,266 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44941
-2022-08-26 14:12:44,267 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38395
-2022-08-26 14:12:44,268 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44941', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:44,268 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44941
-2022-08-26 14:12:44,268 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-11ad528c-1d94-4e23-a401-147020dd2b9a Address tcp://127.0.0.1:44941 Status: Status.closing
-2022-08-26 14:12:44,268 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e5007cf8-9717-4408-82db-5f27130a3b5d Address tcp://127.0.0.1:38395 Status: Status.closing
-2022-08-26 14:12:44,269 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38395', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:44,269 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38395
-2022-08-26 14:12:44,269 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:44,270 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:44,270 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_pid 2022-08-26 14:12:44,500 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:44,502 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:44,502 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46455
-2022-08-26 14:12:44,502 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41509
-2022-08-26 14:12:44,506 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38309
-2022-08-26 14:12:44,506 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38309
-2022-08-26 14:12:44,507 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:44,507 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44909
-2022-08-26 14:12:44,507 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46455
-2022-08-26 14:12:44,507 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,507 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:44,507 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:44,507 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6bb6uafb
-2022-08-26 14:12:44,507 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,507 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37823
-2022-08-26 14:12:44,508 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37823
-2022-08-26 14:12:44,508 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:44,508 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41473
-2022-08-26 14:12:44,508 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46455
-2022-08-26 14:12:44,508 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,508 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:44,508 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:44,508 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2az_k9qi
-2022-08-26 14:12:44,508 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,511 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38309', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:44,511 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38309
-2022-08-26 14:12:44,511 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,512 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37823', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:44,512 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37823
-2022-08-26 14:12:44,512 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,512 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46455
-2022-08-26 14:12:44,512 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,513 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46455
-2022-08-26 14:12:44,513 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,513 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,513 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,524 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38309
-2022-08-26 14:12:44,524 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37823
-2022-08-26 14:12:44,525 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38309', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:44,525 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38309
-2022-08-26 14:12:44,526 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37823', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:44,526 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37823
-2022-08-26 14:12:44,526 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:44,526 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-46c6e6e6-f72a-4b4a-bf21-66558e46ff07 Address tcp://127.0.0.1:38309 Status: Status.closing
-2022-08-26 14:12:44,526 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2cb0a471-c41d-4e71-8a10-ad9bb8825b25 Address tcp://127.0.0.1:37823 Status: Status.closing
-2022-08-26 14:12:44,527 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:44,527 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_get_client 2022-08-26 14:12:44,757 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:44,759 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:44,759 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36133
-2022-08-26 14:12:44,759 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39091
-2022-08-26 14:12:44,764 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37171
-2022-08-26 14:12:44,764 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37171
-2022-08-26 14:12:44,764 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:44,764 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44217
-2022-08-26 14:12:44,764 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36133
-2022-08-26 14:12:44,764 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,764 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:44,764 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:44,764 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-iayfvyd9
-2022-08-26 14:12:44,764 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,765 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33179
-2022-08-26 14:12:44,765 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33179
-2022-08-26 14:12:44,765 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:44,765 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36823
-2022-08-26 14:12:44,765 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36133
-2022-08-26 14:12:44,765 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,765 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:44,765 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:44,765 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2nhwmlgf
-2022-08-26 14:12:44,765 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,768 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37171', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:44,768 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37171
-2022-08-26 14:12:44,769 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,769 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33179', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:44,769 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33179
-2022-08-26 14:12:44,769 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,769 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36133
-2022-08-26 14:12:44,770 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,770 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36133
-2022-08-26 14:12:44,770 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:44,770 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,770 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:44,784 - distributed.scheduler - INFO - Receive client connection: Client-d41aae62-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:44,784 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:45,030 - distributed.scheduler - INFO - Remove client Client-d41aae62-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:45,031 - distributed.scheduler - INFO - Remove client Client-d41aae62-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:45,031 - distributed.scheduler - INFO - Close client connection: Client-d41aae62-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:45,032 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37171
-2022-08-26 14:12:45,033 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33179
-2022-08-26 14:12:45,033 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-79e7b262-efa7-45f8-b31c-0c7aee1ba3ab Address tcp://127.0.0.1:37171 Status: Status.closing
-2022-08-26 14:12:45,034 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b4cc9583-f2e3-44c2-b47b-4beb939a577c Address tcp://127.0.0.1:33179 Status: Status.closing
-2022-08-26 14:12:45,034 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37171', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:45,034 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37171
-2022-08-26 14:12:45,035 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33179', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:45,035 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33179
-2022-08-26 14:12:45,035 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:45,036 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:45,036 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_get_client_sync 2022-08-26 14:12:46,241 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:12:46,243 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:46,246 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:46,247 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38591
-2022-08-26 14:12:46,247 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:12:46,276 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39503
-2022-08-26 14:12:46,277 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39503
-2022-08-26 14:12:46,277 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37999
-2022-08-26 14:12:46,277 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38591
-2022-08-26 14:12:46,277 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:46,277 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:46,277 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:46,277 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-eav68ito
-2022-08-26 14:12:46,277 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:46,302 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37961
-2022-08-26 14:12:46,303 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37961
-2022-08-26 14:12:46,303 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41289
-2022-08-26 14:12:46,303 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38591
-2022-08-26 14:12:46,303 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:46,303 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:46,303 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:46,303 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8c4wjhzo
-2022-08-26 14:12:46,303 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:46,587 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39503', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:46,869 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39503
-2022-08-26 14:12:46,869 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:46,869 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38591
-2022-08-26 14:12:46,869 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:46,870 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37961', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:46,870 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37961
-2022-08-26 14:12:46,870 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:46,870 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:46,870 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38591
-2022-08-26 14:12:46,870 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:46,871 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:46,876 - distributed.scheduler - INFO - Receive client connection: Client-d559e8e9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:46,877 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:46,895 - distributed.scheduler - INFO - Receive client connection: Client-worker-d55c9393-2583-11ed-8358-00d861bc4509
-2022-08-26 14:12:46,895 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:12:46,992 - distributed.scheduler - INFO - Remove client Client-d559e8e9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:46,993 - distributed.scheduler - INFO - Remove client Client-d559e8e9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:46,993 - distributed.scheduler - INFO - Close client connection: Client-d559e8e9-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_worker.py::test_get_client_coroutine 2022-08-26 14:12:47,005 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:47,007 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:47,007 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37163
-2022-08-26 14:12:47,007 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44703
-2022-08-26 14:12:47,008 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-eav68ito', purging
-2022-08-26 14:12:47,008 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-8c4wjhzo', purging
-2022-08-26 14:12:47,012 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41589
-2022-08-26 14:12:47,012 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41589
-2022-08-26 14:12:47,013 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:47,013 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33491
-2022-08-26 14:12:47,013 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37163
-2022-08-26 14:12:47,013 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:47,013 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:47,013 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:47,013 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rh3eqqi4
-2022-08-26 14:12:47,013 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:47,013 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33119
-2022-08-26 14:12:47,013 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33119
-2022-08-26 14:12:47,013 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:47,014 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37539
-2022-08-26 14:12:47,014 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37163
-2022-08-26 14:12:47,014 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:47,014 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:47,014 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:47,014 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i4kgpkk0
-2022-08-26 14:12:47,014 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:47,017 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41589', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:47,017 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41589
-2022-08-26 14:12:47,017 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:47,017 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33119', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:47,018 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33119
-2022-08-26 14:12:47,018 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:47,018 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37163
-2022-08-26 14:12:47,018 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:47,018 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37163
-2022-08-26 14:12:47,018 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:47,019 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:47,019 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:47,032 - distributed.scheduler - INFO - Receive client connection: Client-d571c5b7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:47,033 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:47,036 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:12:47,037 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:12:47,054 - distributed.scheduler - INFO - Remove client Client-d571c5b7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:47,054 - distributed.scheduler - INFO - Remove client Client-d571c5b7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:47,055 - distributed.scheduler - INFO - Close client connection: Client-d571c5b7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:47,056 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41589
-2022-08-26 14:12:47,056 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33119
-2022-08-26 14:12:47,057 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41589', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:47,057 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41589
-2022-08-26 14:12:47,057 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-237c2a41-0be8-49bf-a52e-813dc98c0cca Address tcp://127.0.0.1:41589 Status: Status.closing
-2022-08-26 14:12:47,058 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-efb2fa9d-8a4d-4d2f-9984-e53936a86587 Address tcp://127.0.0.1:33119 Status: Status.closing
-2022-08-26 14:12:47,058 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33119', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:47,058 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33119
-2022-08-26 14:12:47,058 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:47,059 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:47,059 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_get_client_coroutine_sync 2022-08-26 14:12:48,270 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:12:48,272 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:48,275 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:48,275 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34317
-2022-08-26 14:12:48,276 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:12:48,291 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42885
-2022-08-26 14:12:48,291 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42885
-2022-08-26 14:12:48,291 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46795
-2022-08-26 14:12:48,291 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34317
-2022-08-26 14:12:48,292 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:48,292 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:48,292 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:48,292 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yajmqc6f
-2022-08-26 14:12:48,292 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:48,324 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33321
-2022-08-26 14:12:48,324 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33321
-2022-08-26 14:12:48,324 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35171
-2022-08-26 14:12:48,324 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34317
-2022-08-26 14:12:48,324 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:48,324 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:48,324 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:48,324 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-an755a_9
-2022-08-26 14:12:48,324 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:48,606 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42885', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:48,896 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42885
-2022-08-26 14:12:48,897 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:48,897 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34317
-2022-08-26 14:12:48,897 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:48,897 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33321', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:48,898 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33321
-2022-08-26 14:12:48,898 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:48,898 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:48,898 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34317
-2022-08-26 14:12:48,898 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:48,899 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:48,904 - distributed.scheduler - INFO - Receive client connection: Client-d68f4a7a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:48,904 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:48,908 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:12:48,908 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:12:48,910 - distributed.scheduler - INFO - Receive client connection: Client-worker-d6905c66-2583-11ed-8375-00d861bc4509
-2022-08-26 14:12:48,911 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:48,911 - distributed.scheduler - INFO - Receive client connection: Client-worker-d690655f-2583-11ed-8374-00d861bc4509
-2022-08-26 14:12:48,911 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:12:49,006 - distributed.scheduler - INFO - Remove client Client-d68f4a7a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:49,006 - distributed.scheduler - INFO - Remove client Client-d68f4a7a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:49,006 - distributed.scheduler - INFO - Close client connection: Client-d68f4a7a-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_worker.py::test_global_workers 2022-08-26 14:12:49,020 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:49,021 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:49,022 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37089
-2022-08-26 14:12:49,022 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46603
-2022-08-26 14:12:49,022 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-an755a_9', purging
-2022-08-26 14:12:49,023 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-yajmqc6f', purging
-2022-08-26 14:12:49,027 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39719
-2022-08-26 14:12:49,027 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39719
-2022-08-26 14:12:49,027 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:49,027 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41227
-2022-08-26 14:12:49,027 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37089
-2022-08-26 14:12:49,027 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,027 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:49,027 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:49,027 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bnatbc8u
-2022-08-26 14:12:49,027 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,028 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38541
-2022-08-26 14:12:49,028 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38541
-2022-08-26 14:12:49,028 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:49,028 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39523
-2022-08-26 14:12:49,028 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37089
-2022-08-26 14:12:49,028 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,028 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:49,028 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:49,028 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4a50g297
-2022-08-26 14:12:49,028 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,031 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39719', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:49,031 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39719
-2022-08-26 14:12:49,031 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,032 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38541', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:49,032 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38541
-2022-08-26 14:12:49,032 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,032 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37089
-2022-08-26 14:12:49,032 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,033 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37089
-2022-08-26 14:12:49,033 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,033 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,033 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,044 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39719
-2022-08-26 14:12:49,044 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38541
-2022-08-26 14:12:49,045 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39719', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:49,045 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39719
-2022-08-26 14:12:49,046 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38541', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:49,046 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38541
-2022-08-26 14:12:49,046 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:49,046 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6a9ffdf4-8ec3-4f57-800b-e349d40247ee Address tcp://127.0.0.1:39719 Status: Status.closing
-2022-08-26 14:12:49,046 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0facf562-2da1-4ac3-8ef2-7d1d2b27f035 Address tcp://127.0.0.1:38541 Status: Status.closing
-2022-08-26 14:12:49,047 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:49,047 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_fds 2022-08-26 14:12:49,277 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:49,279 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:49,279 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44591
-2022-08-26 14:12:49,279 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38173
-2022-08-26 14:12:49,282 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46875
-2022-08-26 14:12:49,282 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46875
-2022-08-26 14:12:49,282 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41845
-2022-08-26 14:12:49,282 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44591
-2022-08-26 14:12:49,282 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,282 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:49,282 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:49,283 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xd9eg53l
-2022-08-26 14:12:49,283 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,284 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46875', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:49,285 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46875
-2022-08-26 14:12:49,285 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,285 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44591
-2022-08-26 14:12:49,285 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,285 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46875
-2022-08-26 14:12:49,286 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,286 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bc3bff89-0768-4b11-8a97-0cfa377dcb32 Address tcp://127.0.0.1:46875 Status: Status.closing
-2022-08-26 14:12:49,286 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46875', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:49,286 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46875
-2022-08-26 14:12:49,287 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:49,287 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:49,287 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_service_hosts_match_worker 2022-08-26 14:12:49,519 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:49,521 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:49,521 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39485
-2022-08-26 14:12:49,521 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44385
-2022-08-26 14:12:49,524 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40391
-2022-08-26 14:12:49,524 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40391
-2022-08-26 14:12:49,524 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43695
-2022-08-26 14:12:49,524 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39485
-2022-08-26 14:12:49,524 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,524 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:49,524 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:49,524 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7rdd_72z
-2022-08-26 14:12:49,524 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,526 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40391', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:49,526 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40391
-2022-08-26 14:12:49,526 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,526 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39485
-2022-08-26 14:12:49,526 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,527 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40391
-2022-08-26 14:12:49,527 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,527 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0f402aaa-a6f4-43c1-9ee9-86a9c6bfb3e2 Address tcp://127.0.0.1:40391 Status: Status.closing
-2022-08-26 14:12:49,528 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40391', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:49,528 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40391
-2022-08-26 14:12:49,528 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:49,531 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39881
-2022-08-26 14:12:49,531 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39881
-2022-08-26 14:12:49,531 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35147
-2022-08-26 14:12:49,531 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39485
-2022-08-26 14:12:49,531 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,531 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:49,531 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:49,531 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fg_xdpej
-2022-08-26 14:12:49,531 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,533 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39881', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:49,533 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39881
-2022-08-26 14:12:49,533 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,533 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39485
-2022-08-26 14:12:49,533 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,533 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39881
-2022-08-26 14:12:49,534 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,534 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2bc3dc11-e58e-43d3-8d37-164e42555d1d Address tcp://127.0.0.1:39881 Status: Status.closing
-2022-08-26 14:12:49,534 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39881', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:49,534 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39881
-2022-08-26 14:12:49,534 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:49,537 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35893
-2022-08-26 14:12:49,537 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35893
-2022-08-26 14:12:49,537 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40255
-2022-08-26 14:12:49,537 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39485
-2022-08-26 14:12:49,537 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,537 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:49,537 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:49,537 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-r6c7i824
-2022-08-26 14:12:49,538 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,539 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35893', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:49,539 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35893
-2022-08-26 14:12:49,540 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,540 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39485
-2022-08-26 14:12:49,540 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,540 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35893
-2022-08-26 14:12:49,540 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,541 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6a0f28d2-7d8d-4e53-a5bd-cd00523fcb9a Address tcp://127.0.0.1:35893 Status: Status.closing
-2022-08-26 14:12:49,541 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35893', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:49,541 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35893
-2022-08-26 14:12:49,541 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:49,544 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39815
-2022-08-26 14:12:49,544 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39815
-2022-08-26 14:12:49,544 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38177
-2022-08-26 14:12:49,544 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39485
-2022-08-26 14:12:49,544 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,544 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:49,544 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:49,544 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_j8wk5g1
-2022-08-26 14:12:49,544 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,546 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39815', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:49,546 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39815
-2022-08-26 14:12:49,546 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,547 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39485
-2022-08-26 14:12:49,547 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,547 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39815
-2022-08-26 14:12:49,547 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,547 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3cfca835-3c12-4d1b-a74c-e0265fbb4858 Address tcp://127.0.0.1:39815 Status: Status.closing
-2022-08-26 14:12:49,548 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39815', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:49,548 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39815
-2022-08-26 14:12:49,548 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:49,549 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:49,549 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_start_services 2022-08-26 14:12:49,779 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:49,781 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:49,781 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38873
-2022-08-26 14:12:49,781 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44629
-2022-08-26 14:12:49,784 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37915
-2022-08-26 14:12:49,784 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37915
-2022-08-26 14:12:49,784 - distributed.worker - INFO -          dashboard at:             127.0.0.1:1234
-2022-08-26 14:12:49,784 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38873
-2022-08-26 14:12:49,784 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,784 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:49,784 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:49,784 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bvgf53bd
-2022-08-26 14:12:49,784 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,786 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37915', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:49,787 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37915
-2022-08-26 14:12:49,787 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,787 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38873
-2022-08-26 14:12:49,787 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:49,787 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37915
-2022-08-26 14:12:49,788 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:49,788 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-655c6fbf-479e-409c-9111-2911d4198b07 Address tcp://127.0.0.1:37915 Status: Status.closing
-2022-08-26 14:12:49,788 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37915', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:49,788 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37915
-2022-08-26 14:12:49,788 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:49,789 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:49,789 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_scheduler_file 2022-08-26 14:12:50,038 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:50,040 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:50,040 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:42345
-2022-08-26 14:12:50,040 - distributed.scheduler - INFO -   dashboard at:                    :44253
-2022-08-26 14:12:50,043 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:42387
-2022-08-26 14:12:50,043 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:42387
-2022-08-26 14:12:50,043 - distributed.worker - INFO -          dashboard at:        192.168.1.159:38443
-2022-08-26 14:12:50,043 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:42345
-2022-08-26 14:12:50,043 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:50,044 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:50,044 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:50,044 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1q6hete1
-2022-08-26 14:12:50,044 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:50,046 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://192.168.1.159:42387', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:50,046 - distributed.scheduler - INFO - Starting worker compute stream, tcp://192.168.1.159:42387
-2022-08-26 14:12:50,046 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:50,046 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:42345
-2022-08-26 14:12:50,046 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:50,046 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:42387
-2022-08-26 14:12:50,047 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:50,047 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-26553194-9738-4db7-87d3-6941117e05dc Address tcp://192.168.1.159:42387 Status: Status.closing
-2022-08-26 14:12:50,047 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://192.168.1.159:42387', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:50,048 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:42387
-2022-08-26 14:12:50,048 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:50,048 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:50,048 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_scheduler_delay 2022-08-26 14:12:50,054 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:50,055 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:50,056 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45247
-2022-08-26 14:12:50,056 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37093
-2022-08-26 14:12:50,060 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39779
-2022-08-26 14:12:50,060 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39779
-2022-08-26 14:12:50,060 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:50,060 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32993
-2022-08-26 14:12:50,060 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45247
-2022-08-26 14:12:50,060 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:50,060 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:50,060 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:50,061 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ytknvpzt
-2022-08-26 14:12:50,061 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:50,061 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32781
-2022-08-26 14:12:50,061 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32781
-2022-08-26 14:12:50,061 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:50,061 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44615
-2022-08-26 14:12:50,061 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45247
-2022-08-26 14:12:50,061 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:50,061 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:50,061 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:50,061 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-g8wsq8vl
-2022-08-26 14:12:50,062 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:50,064 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39779', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:50,065 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39779
-2022-08-26 14:12:50,065 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:50,065 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32781', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:50,065 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32781
-2022-08-26 14:12:50,065 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:50,066 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45247
-2022-08-26 14:12:50,066 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:50,066 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45247
-2022-08-26 14:12:50,066 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:50,066 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:50,066 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:50,081 - distributed.scheduler - INFO - Receive client connection: Client-d742cb2e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:50,082 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:51,683 - distributed.scheduler - INFO - Remove client Client-d742cb2e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:51,684 - distributed.scheduler - INFO - Remove client Client-d742cb2e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:51,684 - distributed.scheduler - INFO - Close client connection: Client-d742cb2e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:51,684 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39779
-2022-08-26 14:12:51,685 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32781
-2022-08-26 14:12:51,686 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39779', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:51,686 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39779
-2022-08-26 14:12:51,686 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32781', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:51,686 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32781
-2022-08-26 14:12:51,686 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:51,686 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2c09eab7-0342-4365-9255-5927af19267e Address tcp://127.0.0.1:39779 Status: Status.closing
-2022-08-26 14:12:51,686 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a9da2b1e-2b30-49df-8f07-464c22feadcd Address tcp://127.0.0.1:32781 Status: Status.closing
-2022-08-26 14:12:51,687 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:51,688 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_statistical_profiling 2022-08-26 14:12:51,917 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:51,919 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:51,919 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40325
-2022-08-26 14:12:51,919 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46645
-2022-08-26 14:12:51,924 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38353
-2022-08-26 14:12:51,924 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38353
-2022-08-26 14:12:51,924 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:51,924 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43593
-2022-08-26 14:12:51,924 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40325
-2022-08-26 14:12:51,924 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:51,924 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:51,924 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:51,924 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-329hcaed
-2022-08-26 14:12:51,924 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:51,925 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41663
-2022-08-26 14:12:51,925 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41663
-2022-08-26 14:12:51,925 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:51,925 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46855
-2022-08-26 14:12:51,925 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40325
-2022-08-26 14:12:51,925 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:51,925 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:51,925 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:51,925 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kr9ujwwl
-2022-08-26 14:12:51,925 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:51,928 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38353', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:51,928 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38353
-2022-08-26 14:12:51,928 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:51,929 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41663', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:51,929 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41663
-2022-08-26 14:12:51,929 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:51,929 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40325
-2022-08-26 14:12:51,929 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:51,929 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40325
-2022-08-26 14:12:51,930 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:51,930 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:51,930 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:51,944 - distributed.scheduler - INFO - Receive client connection: Client-d85f2f26-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:51,944 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:52,375 - distributed.scheduler - INFO - Remove client Client-d85f2f26-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:52,375 - distributed.scheduler - INFO - Remove client Client-d85f2f26-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:52,376 - distributed.scheduler - INFO - Close client connection: Client-d85f2f26-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:52,376 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38353
-2022-08-26 14:12:52,376 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41663
-2022-08-26 14:12:52,377 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38353', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:52,378 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38353
-2022-08-26 14:12:52,378 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41663', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:52,378 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41663
-2022-08-26 14:12:52,378 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:52,378 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3582f97b-9ec9-453a-8b7e-4631ae9a385a Address tcp://127.0.0.1:38353 Status: Status.closing
-2022-08-26 14:12:52,378 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5dc38b6d-6cb0-4a93-b317-90232c117cb1 Address tcp://127.0.0.1:41663 Status: Status.closing
-2022-08-26 14:12:52,379 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:52,380 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_statistical_profiling_2 SKIPPED
-distributed/tests/test_worker.py::test_statistical_profiling_cycle 2022-08-26 14:12:52,622 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:52,624 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:52,624 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37327
-2022-08-26 14:12:52,624 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46505
-2022-08-26 14:12:52,628 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36193
-2022-08-26 14:12:52,628 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36193
-2022-08-26 14:12:52,628 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:52,628 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40859
-2022-08-26 14:12:52,629 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37327
-2022-08-26 14:12:52,629 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:52,629 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:52,629 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:52,629 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5nkdl32j
-2022-08-26 14:12:52,629 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:52,629 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35663
-2022-08-26 14:12:52,629 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35663
-2022-08-26 14:12:52,629 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:52,629 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45219
-2022-08-26 14:12:52,630 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37327
-2022-08-26 14:12:52,630 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:52,630 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:52,630 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:52,630 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ofvpxldo
-2022-08-26 14:12:52,630 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:52,633 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36193', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:52,633 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36193
-2022-08-26 14:12:52,633 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:52,633 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35663', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:52,634 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35663
-2022-08-26 14:12:52,634 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:52,634 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37327
-2022-08-26 14:12:52,634 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:52,634 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37327
-2022-08-26 14:12:52,634 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:52,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:52,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:52,649 - distributed.scheduler - INFO - Receive client connection: Client-d8cabeb1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:52,649 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:53,049 - distributed.scheduler - INFO - Remove client Client-d8cabeb1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:53,049 - distributed.scheduler - INFO - Remove client Client-d8cabeb1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:53,049 - distributed.scheduler - INFO - Close client connection: Client-d8cabeb1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:53,050 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36193
-2022-08-26 14:12:53,050 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35663
-2022-08-26 14:12:53,051 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36193', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:53,051 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36193
-2022-08-26 14:12:53,051 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35663', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:53,051 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35663
-2022-08-26 14:12:53,051 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:53,052 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a94f5400-ebf6-46db-9e7d-a3734f758cc0 Address tcp://127.0.0.1:36193 Status: Status.closing
-2022-08-26 14:12:53,052 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-47a723ca-c51e-4ae9-9ef0-3a90828c1145 Address tcp://127.0.0.1:35663 Status: Status.closing
-2022-08-26 14:12:53,053 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:53,053 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_get_current_task 2022-08-26 14:12:53,295 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:53,297 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:53,297 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39525
-2022-08-26 14:12:53,297 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41331
-2022-08-26 14:12:53,301 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43299
-2022-08-26 14:12:53,302 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43299
-2022-08-26 14:12:53,302 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:53,302 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45611
-2022-08-26 14:12:53,302 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39525
-2022-08-26 14:12:53,302 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:53,302 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:53,302 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:53,302 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-u7y5mgxb
-2022-08-26 14:12:53,302 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:53,302 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42941
-2022-08-26 14:12:53,302 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42941
-2022-08-26 14:12:53,303 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:53,303 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43853
-2022-08-26 14:12:53,303 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39525
-2022-08-26 14:12:53,303 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:53,303 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:53,303 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:53,303 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nlglef3_
-2022-08-26 14:12:53,303 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:53,306 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43299', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:53,306 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43299
-2022-08-26 14:12:53,306 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:53,306 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42941', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:53,307 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42941
-2022-08-26 14:12:53,307 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:53,307 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39525
-2022-08-26 14:12:53,307 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:53,307 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39525
-2022-08-26 14:12:53,307 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:53,308 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:53,308 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:53,321 - distributed.scheduler - INFO - Receive client connection: Client-d9316673-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:53,322 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:53,343 - distributed.scheduler - INFO - Remove client Client-d9316673-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:53,343 - distributed.scheduler - INFO - Remove client Client-d9316673-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:53,344 - distributed.scheduler - INFO - Close client connection: Client-d9316673-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:53,344 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43299
-2022-08-26 14:12:53,345 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42941
-2022-08-26 14:12:53,346 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43299', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:53,346 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43299
-2022-08-26 14:12:53,346 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a43f8268-d6fc-4aeb-960a-60275d0f0113 Address tcp://127.0.0.1:43299 Status: Status.closing
-2022-08-26 14:12:53,346 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6ab95261-8fb2-45b3-9b03-30970de80fd4 Address tcp://127.0.0.1:42941 Status: Status.closing
-2022-08-26 14:12:53,347 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42941', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:53,347 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42941
-2022-08-26 14:12:53,347 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:53,347 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:53,347 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_deque_handler 2022-08-26 14:12:53,579 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:53,580 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:53,581 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35815
-2022-08-26 14:12:53,581 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43615
-2022-08-26 14:12:53,583 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42899
-2022-08-26 14:12:53,583 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42899
-2022-08-26 14:12:53,584 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33831
-2022-08-26 14:12:53,584 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35815
-2022-08-26 14:12:53,584 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:53,584 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:53,584 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:53,584 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-dg_j560v
-2022-08-26 14:12:53,584 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:53,586 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42899', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:53,586 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42899
-2022-08-26 14:12:53,586 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:53,586 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35815
-2022-08-26 14:12:53,586 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:53,586 - distributed.worker - INFO - foo456
-2022-08-26 14:12:53,586 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42899
-2022-08-26 14:12:53,587 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:53,587 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-05806257-254c-4667-aab7-a336a0409da6 Address tcp://127.0.0.1:42899 Status: Status.closing
-2022-08-26 14:12:53,588 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42899', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:53,588 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42899
-2022-08-26 14:12:53,588 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:53,588 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:53,588 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_get_worker_name 2022-08-26 14:12:54,797 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:12:54,799 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:54,802 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:54,803 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41137
-2022-08-26 14:12:54,803 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:12:54,820 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34909
-2022-08-26 14:12:54,820 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34909
-2022-08-26 14:12:54,820 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44013
-2022-08-26 14:12:54,820 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41137
-2022-08-26 14:12:54,820 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:54,820 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:54,820 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:54,820 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k9iz25mi
-2022-08-26 14:12:54,820 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:54,851 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36803
-2022-08-26 14:12:54,851 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36803
-2022-08-26 14:12:54,851 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40645
-2022-08-26 14:12:54,851 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41137
-2022-08-26 14:12:54,851 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:54,851 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:54,851 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:54,851 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9h2oo70m
-2022-08-26 14:12:54,851 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,130 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34909', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:55,418 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34909
-2022-08-26 14:12:55,419 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,419 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41137
-2022-08-26 14:12:55,419 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,419 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36803', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:55,420 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36803
-2022-08-26 14:12:55,420 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,420 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,420 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41137
-2022-08-26 14:12:55,420 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,421 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,425 - distributed.scheduler - INFO - Receive client connection: Client-da727743-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:55,426 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,429 - distributed.worker - INFO - Run out-of-band function 'f'
-2022-08-26 14:12:55,429 - distributed.worker - INFO - Run out-of-band function 'f'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py:1212: RuntimeWarning: coroutine 'Future._result' was never awaited
-  get_client().submit(inc, 1).result()
-RuntimeWarning: Enable tracemalloc to get the object allocation traceback
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py:1212: RuntimeWarning: coroutine 'Future._result' was never awaited
-  get_client().submit(inc, 1).result()
-RuntimeWarning: Enable tracemalloc to get the object allocation traceback
-2022-08-26 14:12:55,508 - distributed.worker - INFO - Run out-of-band function 'func'
-2022-08-26 14:12:55,509 - distributed.scheduler - INFO - Receive client connection: Client-worker-da736155-2583-11ed-8457-00d861bc4509
-2022-08-26 14:12:55,509 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,509 - distributed.scheduler - INFO - Receive client connection: Client-worker-da736483-2583-11ed-8458-00d861bc4509
-2022-08-26 14:12:55,510 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,609 - distributed.worker - INFO - Run out-of-band function 'func'
-PASSED2022-08-26 14:12:55,610 - distributed.scheduler - INFO - Remove client Client-da727743-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:55,610 - distributed.scheduler - INFO - Remove client Client-da727743-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:55,610 - distributed.scheduler - INFO - Close client connection: Client-da727743-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_worker.py::test_scheduler_address_config 2022-08-26 14:12:55,623 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:55,625 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:55,625 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42041
-2022-08-26 14:12:55,625 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38947
-2022-08-26 14:12:55,628 - distributed.scheduler - INFO - Receive client connection: Client-da916c47-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:55,629 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,630 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-9h2oo70m', purging
-2022-08-26 14:12:55,630 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-k9iz25mi', purging
-2022-08-26 14:12:55,632 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37127
-2022-08-26 14:12:55,632 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37127
-2022-08-26 14:12:55,632 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34329
-2022-08-26 14:12:55,632 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42041
-2022-08-26 14:12:55,632 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,632 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:55,633 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:55,633 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qfktje19
-2022-08-26 14:12:55,633 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,635 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37127', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:55,635 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37127
-2022-08-26 14:12:55,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,635 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42041
-2022-08-26 14:12:55,635 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,635 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37127
-2022-08-26 14:12:55,636 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,636 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b3695fd2-0214-4bca-a7b0-c7229df2f976 Address tcp://127.0.0.1:37127 Status: Status.closing
-2022-08-26 14:12:55,637 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37127', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:55,637 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37127
-2022-08-26 14:12:55,637 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:55,641 - distributed.scheduler - INFO - Remove client Client-da916c47-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:55,641 - distributed.scheduler - INFO - Remove client Client-da916c47-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:55,641 - distributed.scheduler - INFO - Close client connection: Client-da916c47-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:55,641 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:55,642 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_wait_for_outgoing SKIPPED (ne...)
-distributed/tests/test_worker.py::test_prefer_gather_from_local_address 2022-08-26 14:12:55,871 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:55,873 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:55,873 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36215
-2022-08-26 14:12:55,873 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44359
-2022-08-26 14:12:55,879 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35165
-2022-08-26 14:12:55,879 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35165
-2022-08-26 14:12:55,879 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:55,879 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43447
-2022-08-26 14:12:55,879 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36215
-2022-08-26 14:12:55,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,879 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:55,879 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:55,879 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jy3fq4yj
-2022-08-26 14:12:55,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,880 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40963
-2022-08-26 14:12:55,880 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40963
-2022-08-26 14:12:55,880 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:55,880 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40849
-2022-08-26 14:12:55,880 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36215
-2022-08-26 14:12:55,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,880 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:55,880 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:55,880 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-q53auleq
-2022-08-26 14:12:55,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,881 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:35141
-2022-08-26 14:12:55,881 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:35141
-2022-08-26 14:12:55,881 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:12:55,881 - distributed.worker - INFO -          dashboard at:            127.0.0.2:40847
-2022-08-26 14:12:55,881 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36215
-2022-08-26 14:12:55,881 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,881 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:55,881 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:55,881 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s34f1e9n
-2022-08-26 14:12:55,881 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,885 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35165', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:55,885 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35165
-2022-08-26 14:12:55,885 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,886 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40963', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:55,886 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40963
-2022-08-26 14:12:55,886 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,886 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:35141', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:55,887 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:35141
-2022-08-26 14:12:55,887 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,887 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36215
-2022-08-26 14:12:55,887 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,887 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36215
-2022-08-26 14:12:55,887 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,888 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36215
-2022-08-26 14:12:55,888 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:55,888 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,888 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,888 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,902 - distributed.scheduler - INFO - Receive client connection: Client-dabb2978-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:55,902 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:55,924 - distributed.scheduler - INFO - Remove client Client-dabb2978-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:55,925 - distributed.scheduler - INFO - Remove client Client-dabb2978-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:55,925 - distributed.scheduler - INFO - Close client connection: Client-dabb2978-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:55,927 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35165
-2022-08-26 14:12:55,927 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40963
-2022-08-26 14:12:55,927 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:35141
-2022-08-26 14:12:55,928 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ced5f364-68d4-47e5-8d58-1ece0bb0682f Address tcp://127.0.0.1:35165 Status: Status.closing
-2022-08-26 14:12:55,928 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-460bd9ec-6d5d-490e-af00-1e6eabd27242 Address tcp://127.0.0.1:40963 Status: Status.closing
-2022-08-26 14:12:55,929 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ed5a982f-e990-4403-9125-f45c41d7202d Address tcp://127.0.0.2:35141 Status: Status.closing
-2022-08-26 14:12:55,929 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35165', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:55,929 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35165
-2022-08-26 14:12:55,929 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40963', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:55,929 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40963
-2022-08-26 14:12:55,930 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:35141', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:55,930 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:35141
-2022-08-26 14:12:55,930 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:55,931 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:55,931 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_avoid_oversubscription 2022-08-26 14:12:56,161 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:56,162 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:56,163 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45325
-2022-08-26 14:12:56,163 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38155
-2022-08-26 14:12:56,200 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37739
-2022-08-26 14:12:56,201 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37739
-2022-08-26 14:12:56,201 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:56,201 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39013
-2022-08-26 14:12:56,201 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,201 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,201 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,201 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,201 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7pjt3oe1
-2022-08-26 14:12:56,201 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,202 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40545
-2022-08-26 14:12:56,202 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40545
-2022-08-26 14:12:56,202 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:56,202 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45447
-2022-08-26 14:12:56,202 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,202 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,202 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,202 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,202 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-urpkwr2z
-2022-08-26 14:12:56,202 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,203 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44941
-2022-08-26 14:12:56,203 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44941
-2022-08-26 14:12:56,203 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:12:56,203 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42923
-2022-08-26 14:12:56,203 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,203 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,203 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,203 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,203 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1rg97ms4
-2022-08-26 14:12:56,203 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,204 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43097
-2022-08-26 14:12:56,204 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43097
-2022-08-26 14:12:56,204 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:12:56,204 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45769
-2022-08-26 14:12:56,204 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,204 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,204 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,204 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,204 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1bx9_abe
-2022-08-26 14:12:56,205 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,205 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42237
-2022-08-26 14:12:56,205 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42237
-2022-08-26 14:12:56,205 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:12:56,205 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42335
-2022-08-26 14:12:56,205 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,205 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,205 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,206 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,206 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8c91mnn9
-2022-08-26 14:12:56,206 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,206 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41447
-2022-08-26 14:12:56,206 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41447
-2022-08-26 14:12:56,206 - distributed.worker - INFO -           Worker name:                          5
-2022-08-26 14:12:56,206 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38609
-2022-08-26 14:12:56,206 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,206 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,207 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,207 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,207 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-hsuaxpp8
-2022-08-26 14:12:56,207 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,207 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36123
-2022-08-26 14:12:56,207 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36123
-2022-08-26 14:12:56,207 - distributed.worker - INFO -           Worker name:                          6
-2022-08-26 14:12:56,207 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42327
-2022-08-26 14:12:56,208 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,208 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,208 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,208 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,208 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-n7i3bbrn
-2022-08-26 14:12:56,208 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,208 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35705
-2022-08-26 14:12:56,208 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35705
-2022-08-26 14:12:56,208 - distributed.worker - INFO -           Worker name:                          7
-2022-08-26 14:12:56,209 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33301
-2022-08-26 14:12:56,209 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,209 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,209 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,209 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,209 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-o76qkp70
-2022-08-26 14:12:56,209 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,209 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46761
-2022-08-26 14:12:56,210 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46761
-2022-08-26 14:12:56,210 - distributed.worker - INFO -           Worker name:                          8
-2022-08-26 14:12:56,210 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45191
-2022-08-26 14:12:56,210 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,210 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,210 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,210 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,210 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jj5tovee
-2022-08-26 14:12:56,210 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,211 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38565
-2022-08-26 14:12:56,211 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38565
-2022-08-26 14:12:56,211 - distributed.worker - INFO -           Worker name:                          9
-2022-08-26 14:12:56,211 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35987
-2022-08-26 14:12:56,211 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,211 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,211 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,211 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,211 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_694zkl3
-2022-08-26 14:12:56,211 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,212 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32895
-2022-08-26 14:12:56,212 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32895
-2022-08-26 14:12:56,212 - distributed.worker - INFO -           Worker name:                         10
-2022-08-26 14:12:56,212 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41721
-2022-08-26 14:12:56,212 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,212 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,212 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,212 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,212 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c34u6s5q
-2022-08-26 14:12:56,212 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,213 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40957
-2022-08-26 14:12:56,213 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40957
-2022-08-26 14:12:56,213 - distributed.worker - INFO -           Worker name:                         11
-2022-08-26 14:12:56,213 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36453
-2022-08-26 14:12:56,213 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,213 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,213 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,213 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,213 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ac6gev0l
-2022-08-26 14:12:56,213 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,214 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45727
-2022-08-26 14:12:56,214 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45727
-2022-08-26 14:12:56,214 - distributed.worker - INFO -           Worker name:                         12
-2022-08-26 14:12:56,214 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32865
-2022-08-26 14:12:56,214 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,214 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,214 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,214 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,214 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ub0wdxet
-2022-08-26 14:12:56,214 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,215 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40741
-2022-08-26 14:12:56,215 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40741
-2022-08-26 14:12:56,215 - distributed.worker - INFO -           Worker name:                         13
-2022-08-26 14:12:56,215 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45303
-2022-08-26 14:12:56,215 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,215 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,215 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,215 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,215 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ykf9scww
-2022-08-26 14:12:56,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,216 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38115
-2022-08-26 14:12:56,216 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38115
-2022-08-26 14:12:56,216 - distributed.worker - INFO -           Worker name:                         14
-2022-08-26 14:12:56,216 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38649
-2022-08-26 14:12:56,216 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,216 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,217 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,217 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oasmz_bs
-2022-08-26 14:12:56,217 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,217 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42357
-2022-08-26 14:12:56,217 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42357
-2022-08-26 14:12:56,217 - distributed.worker - INFO -           Worker name:                         15
-2022-08-26 14:12:56,217 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44757
-2022-08-26 14:12:56,217 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,218 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,218 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,218 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,218 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-w8a5wrko
-2022-08-26 14:12:56,218 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,218 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40389
-2022-08-26 14:12:56,218 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40389
-2022-08-26 14:12:56,218 - distributed.worker - INFO -           Worker name:                         16
-2022-08-26 14:12:56,218 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44799
-2022-08-26 14:12:56,219 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,219 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,219 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,219 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,219 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nsmjszug
-2022-08-26 14:12:56,219 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,219 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33469
-2022-08-26 14:12:56,219 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33469
-2022-08-26 14:12:56,220 - distributed.worker - INFO -           Worker name:                         17
-2022-08-26 14:12:56,220 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43129
-2022-08-26 14:12:56,220 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,220 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,220 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,220 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,220 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rfbjq6__
-2022-08-26 14:12:56,220 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,220 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38265
-2022-08-26 14:12:56,221 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38265
-2022-08-26 14:12:56,221 - distributed.worker - INFO -           Worker name:                         18
-2022-08-26 14:12:56,221 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36985
-2022-08-26 14:12:56,221 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,221 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,221 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,221 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,221 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lgi7qevc
-2022-08-26 14:12:56,221 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,222 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44015
-2022-08-26 14:12:56,222 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44015
-2022-08-26 14:12:56,222 - distributed.worker - INFO -           Worker name:                         19
-2022-08-26 14:12:56,222 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35389
-2022-08-26 14:12:56,222 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,222 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,222 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,223 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,223 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-pwpnoraz
-2022-08-26 14:12:56,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,242 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37739', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,242 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37739
-2022-08-26 14:12:56,242 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,243 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40545', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,243 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40545
-2022-08-26 14:12:56,243 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,243 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44941', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,243 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44941
-2022-08-26 14:12:56,243 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,244 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43097', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,244 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43097
-2022-08-26 14:12:56,244 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,244 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42237', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,245 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42237
-2022-08-26 14:12:56,245 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,245 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41447', name: 5, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,245 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41447
-2022-08-26 14:12:56,245 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,246 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36123', name: 6, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,246 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36123
-2022-08-26 14:12:56,246 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,246 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35705', name: 7, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,247 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35705
-2022-08-26 14:12:56,247 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,247 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46761', name: 8, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,247 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46761
-2022-08-26 14:12:56,247 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,248 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38565', name: 9, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,248 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38565
-2022-08-26 14:12:56,248 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,248 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32895', name: 10, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,249 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32895
-2022-08-26 14:12:56,249 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,249 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40957', name: 11, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,249 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40957
-2022-08-26 14:12:56,249 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,250 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45727', name: 12, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,250 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45727
-2022-08-26 14:12:56,250 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,250 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40741', name: 13, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,251 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40741
-2022-08-26 14:12:56,251 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,251 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38115', name: 14, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,251 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38115
-2022-08-26 14:12:56,251 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,252 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42357', name: 15, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,252 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42357
-2022-08-26 14:12:56,252 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,252 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40389', name: 16, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,253 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40389
-2022-08-26 14:12:56,253 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,253 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33469', name: 17, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,253 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33469
-2022-08-26 14:12:56,253 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,254 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38265', name: 18, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,254 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38265
-2022-08-26 14:12:56,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,254 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44015', name: 19, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,255 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44015
-2022-08-26 14:12:56,255 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,256 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,256 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,256 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,256 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,257 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,257 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,257 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,258 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,258 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,258 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,259 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,259 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,259 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,259 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,259 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,259 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,259 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,259 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,260 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,260 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,260 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,260 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,260 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,260 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,261 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,261 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,261 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45325
-2022-08-26 14:12:56,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,262 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,278 - distributed.scheduler - INFO - Receive client connection: Client-daf49129-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:56,279 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,699 - distributed.scheduler - INFO - Remove client Client-daf49129-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:56,699 - distributed.scheduler - INFO - Remove client Client-daf49129-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:56,700 - distributed.scheduler - INFO - Close client connection: Client-daf49129-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:56,700 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37739
-2022-08-26 14:12:56,701 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40545
-2022-08-26 14:12:56,701 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44941
-2022-08-26 14:12:56,701 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43097
-2022-08-26 14:12:56,701 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42237
-2022-08-26 14:12:56,702 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41447
-2022-08-26 14:12:56,702 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36123
-2022-08-26 14:12:56,702 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35705
-2022-08-26 14:12:56,703 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46761
-2022-08-26 14:12:56,703 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38565
-2022-08-26 14:12:56,703 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32895
-2022-08-26 14:12:56,703 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40957
-2022-08-26 14:12:56,703 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45727
-2022-08-26 14:12:56,704 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40741
-2022-08-26 14:12:56,704 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38115
-2022-08-26 14:12:56,704 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42357
-2022-08-26 14:12:56,704 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40389
-2022-08-26 14:12:56,704 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33469
-2022-08-26 14:12:56,704 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38265
-2022-08-26 14:12:56,704 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44015
-2022-08-26 14:12:56,711 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37739', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,712 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37739
-2022-08-26 14:12:56,712 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40545', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,712 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40545
-2022-08-26 14:12:56,712 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44941', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,712 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44941
-2022-08-26 14:12:56,712 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43097', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,712 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43097
-2022-08-26 14:12:56,712 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42237', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,712 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42237
-2022-08-26 14:12:56,713 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41447', name: 5, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,713 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41447
-2022-08-26 14:12:56,713 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36123', name: 6, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,713 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36123
-2022-08-26 14:12:56,713 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35705', name: 7, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,713 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35705
-2022-08-26 14:12:56,713 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46761', name: 8, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,713 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46761
-2022-08-26 14:12:56,714 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-50c8d680-fbd1-4ba3-bf89-8907d9fb4fca Address tcp://127.0.0.1:37739 Status: Status.closing
-2022-08-26 14:12:56,714 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-989e5565-61fa-4d97-8ec7-5a6c0d4669da Address tcp://127.0.0.1:40545 Status: Status.closing
-2022-08-26 14:12:56,714 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-38f9b7ab-d8fc-44ad-805b-329b6931dde3 Address tcp://127.0.0.1:44941 Status: Status.closing
-2022-08-26 14:12:56,714 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3739b908-85c5-4fde-a42e-7735a4183ae3 Address tcp://127.0.0.1:43097 Status: Status.closing
-2022-08-26 14:12:56,715 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-65212a27-8707-49b7-9640-a9f483ee1e30 Address tcp://127.0.0.1:42237 Status: Status.closing
-2022-08-26 14:12:56,715 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-34bfd2a4-a8b0-4368-8ad3-fb7c71a630d6 Address tcp://127.0.0.1:41447 Status: Status.closing
-2022-08-26 14:12:56,715 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b9a76f28-0fc7-4c85-b96c-122dc0b4fddf Address tcp://127.0.0.1:36123 Status: Status.closing
-2022-08-26 14:12:56,715 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d06499ec-9937-430c-bc93-ce6d8964bd5b Address tcp://127.0.0.1:35705 Status: Status.closing
-2022-08-26 14:12:56,715 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1c8e7b35-a5f7-4136-9f04-257cb1ba35a5 Address tcp://127.0.0.1:46761 Status: Status.closing
-2022-08-26 14:12:56,720 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38565', name: 9, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,720 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38565
-2022-08-26 14:12:56,720 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:32895', name: 10, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,721 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:32895
-2022-08-26 14:12:56,721 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40957', name: 11, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,721 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40957
-2022-08-26 14:12:56,721 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45727', name: 12, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,721 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45727
-2022-08-26 14:12:56,721 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40741', name: 13, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,721 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40741
-2022-08-26 14:12:56,722 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38115', name: 14, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,722 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38115
-2022-08-26 14:12:56,722 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42357', name: 15, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,722 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42357
-2022-08-26 14:12:56,722 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40389', name: 16, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,722 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40389
-2022-08-26 14:12:56,722 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33469', name: 17, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,722 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33469
-2022-08-26 14:12:56,722 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38265', name: 18, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,722 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38265
-2022-08-26 14:12:56,723 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44015', name: 19, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:56,723 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44015
-2022-08-26 14:12:56,723 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:56,723 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-de1b901b-9e90-46fa-9293-a97fd9f97f22 Address tcp://127.0.0.1:38565 Status: Status.closing
-2022-08-26 14:12:56,723 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d17d91a1-4800-4de5-9ff0-381697b6aed3 Address tcp://127.0.0.1:32895 Status: Status.closing
-2022-08-26 14:12:56,723 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e827d881-ab67-447f-9cfa-a2aae257452a Address tcp://127.0.0.1:40957 Status: Status.closing
-2022-08-26 14:12:56,723 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fbb64425-28e3-470d-beae-86eed3ec14ba Address tcp://127.0.0.1:45727 Status: Status.closing
-2022-08-26 14:12:56,724 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-92abf7ff-61a0-41e7-8f37-ec06c35ba7f2 Address tcp://127.0.0.1:40741 Status: Status.closing
-2022-08-26 14:12:56,724 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b05a1064-b7b5-46da-9e56-7e64af77341d Address tcp://127.0.0.1:38115 Status: Status.closing
-2022-08-26 14:12:56,724 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-13806a84-fef1-483e-9e34-ddcf11f41c8b Address tcp://127.0.0.1:42357 Status: Status.closing
-2022-08-26 14:12:56,724 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b531886e-1a29-4ded-9d44-c65a519edf8e Address tcp://127.0.0.1:40389 Status: Status.closing
-2022-08-26 14:12:56,724 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ddb09157-c4ab-4865-99de-47039ccfe247 Address tcp://127.0.0.1:33469 Status: Status.closing
-2022-08-26 14:12:56,724 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b55f80e7-6db3-4d03-9102-7a38dfdf3ec6 Address tcp://127.0.0.1:38265 Status: Status.closing
-2022-08-26 14:12:56,725 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3718cc28-8f53-4886-b60a-145ec4fa535e Address tcp://127.0.0.1:44015 Status: Status.closing
-2022-08-26 14:12:56,735 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:56,736 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_custom_metrics 2022-08-26 14:12:56,974 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:56,976 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:56,976 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35905
-2022-08-26 14:12:56,976 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35305
-2022-08-26 14:12:56,980 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40699
-2022-08-26 14:12:56,981 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40699
-2022-08-26 14:12:56,981 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:56,981 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42637
-2022-08-26 14:12:56,981 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35905
-2022-08-26 14:12:56,981 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,981 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:56,981 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,981 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x1d12aha
-2022-08-26 14:12:56,981 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,982 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42435
-2022-08-26 14:12:56,982 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42435
-2022-08-26 14:12:56,982 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:56,982 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32831
-2022-08-26 14:12:56,982 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35905
-2022-08-26 14:12:56,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,982 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:56,982 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:56,982 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oubd00c4
-2022-08-26 14:12:56,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,985 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40699', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,985 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40699
-2022-08-26 14:12:56,985 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,986 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42435', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:56,986 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42435
-2022-08-26 14:12:56,986 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,986 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35905
-2022-08-26 14:12:56,986 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,987 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35905
-2022-08-26 14:12:56,987 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:56,987 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:56,987 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,001 - distributed.scheduler - INFO - Receive client connection: Client-db62d32b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,001 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,012 - distributed.scheduler - INFO - Remove client Client-db62d32b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,012 - distributed.scheduler - INFO - Remove client Client-db62d32b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,013 - distributed.scheduler - INFO - Close client connection: Client-db62d32b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,013 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40699
-2022-08-26 14:12:57,013 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42435
-2022-08-26 14:12:57,014 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40699', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:57,014 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40699
-2022-08-26 14:12:57,014 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42435', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:57,014 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42435
-2022-08-26 14:12:57,015 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:57,015 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4b909983-8cf0-4ab8-a99d-1394c097963d Address tcp://127.0.0.1:40699 Status: Status.closing
-2022-08-26 14:12:57,015 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b9048ea5-760b-4a32-9830-84771e3c01c3 Address tcp://127.0.0.1:42435 Status: Status.closing
-2022-08-26 14:12:57,016 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:57,016 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_register_worker_callbacks 2022-08-26 14:12:57,251 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:57,253 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:57,253 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39671
-2022-08-26 14:12:57,253 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38279
-2022-08-26 14:12:57,258 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45021
-2022-08-26 14:12:57,258 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45021
-2022-08-26 14:12:57,258 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:57,258 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40305
-2022-08-26 14:12:57,258 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39671
-2022-08-26 14:12:57,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,258 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:57,258 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:57,258 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xdoxiip3
-2022-08-26 14:12:57,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,259 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35209
-2022-08-26 14:12:57,259 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35209
-2022-08-26 14:12:57,259 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:57,259 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43319
-2022-08-26 14:12:57,259 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39671
-2022-08-26 14:12:57,259 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,259 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:57,259 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:57,259 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_eu3_vxe
-2022-08-26 14:12:57,259 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,262 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45021', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:57,263 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45021
-2022-08-26 14:12:57,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,263 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35209', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:57,263 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35209
-2022-08-26 14:12:57,263 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,264 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39671
-2022-08-26 14:12:57,264 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,264 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39671
-2022-08-26 14:12:57,264 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,264 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,264 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,278 - distributed.scheduler - INFO - Receive client connection: Client-db8d21d1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,278 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,281 - distributed.worker - INFO - Run out-of-band function 'test_import'
-2022-08-26 14:12:57,282 - distributed.worker - INFO - Run out-of-band function 'test_import'
-2022-08-26 14:12:57,285 - distributed.worker - INFO - Run out-of-band function 'test_startup2'
-2022-08-26 14:12:57,285 - distributed.worker - INFO - Run out-of-band function 'test_startup2'
-2022-08-26 14:12:57,288 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33089
-2022-08-26 14:12:57,289 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33089
-2022-08-26 14:12:57,289 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35113
-2022-08-26 14:12:57,289 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39671
-2022-08-26 14:12:57,289 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,289 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:57,289 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:57,289 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lydrdf_u
-2022-08-26 14:12:57,289 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,291 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33089', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:57,291 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33089
-2022-08-26 14:12:57,291 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,291 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39671
-2022-08-26 14:12:57,291 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,292 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,293 - distributed.worker - INFO - Run out-of-band function 'test_import'
-2022-08-26 14:12:57,294 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33089
-2022-08-26 14:12:57,295 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33089', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:57,295 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33089
-2022-08-26 14:12:57,295 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9eb9382a-a8c1-49dd-a260-5923795c3f04 Address tcp://127.0.0.1:33089 Status: Status.closing
-2022-08-26 14:12:57,298 - distributed.worker - INFO - Starting Worker plugin _WorkerSetupPlugin-797ba329-f0d5-424b-852d-806a22109a8d
-2022-08-26 14:12:57,298 - distributed.worker - INFO - Starting Worker plugin _WorkerSetupPlugin-797ba329-f0d5-424b-852d-806a22109a8d
-2022-08-26 14:12:57,301 - distributed.worker - INFO - Run out-of-band function 'test_import'
-2022-08-26 14:12:57,301 - distributed.worker - INFO - Run out-of-band function 'test_import'
-2022-08-26 14:12:57,305 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44673
-2022-08-26 14:12:57,305 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44673
-2022-08-26 14:12:57,305 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41809
-2022-08-26 14:12:57,305 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39671
-2022-08-26 14:12:57,305 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,305 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:57,305 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:57,305 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x_e5898d
-2022-08-26 14:12:57,305 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,307 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44673', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:57,307 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44673
-2022-08-26 14:12:57,307 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,308 - distributed.worker - INFO - Starting Worker plugin _WorkerSetupPlugin-797ba329-f0d5-424b-852d-806a22109a8d
-2022-08-26 14:12:57,308 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39671
-2022-08-26 14:12:57,308 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,308 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,310 - distributed.worker - INFO - Run out-of-band function 'test_import'
-2022-08-26 14:12:57,311 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44673
-2022-08-26 14:12:57,311 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44673', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:57,311 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44673
-2022-08-26 14:12:57,311 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ee86614f-c99a-4ccf-81a6-fb271a2eb794 Address tcp://127.0.0.1:44673 Status: Status.closing
-2022-08-26 14:12:57,314 - distributed.worker - INFO - Starting Worker plugin _WorkerSetupPlugin-04b9237d-2c4b-4c90-bb92-778bacd55d6a
-2022-08-26 14:12:57,315 - distributed.worker - INFO - Starting Worker plugin _WorkerSetupPlugin-04b9237d-2c4b-4c90-bb92-778bacd55d6a
-2022-08-26 14:12:57,317 - distributed.worker - INFO - Run out-of-band function 'test_startup2'
-2022-08-26 14:12:57,318 - distributed.worker - INFO - Run out-of-band function 'test_startup2'
-2022-08-26 14:12:57,321 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41895
-2022-08-26 14:12:57,321 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41895
-2022-08-26 14:12:57,321 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41097
-2022-08-26 14:12:57,321 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39671
-2022-08-26 14:12:57,321 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,321 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:57,321 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:57,321 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-53rn49pb
-2022-08-26 14:12:57,322 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,323 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41895', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:57,323 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41895
-2022-08-26 14:12:57,324 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,324 - distributed.worker - INFO - Starting Worker plugin _WorkerSetupPlugin-797ba329-f0d5-424b-852d-806a22109a8d
-2022-08-26 14:12:57,324 - distributed.worker - INFO - Starting Worker plugin _WorkerSetupPlugin-04b9237d-2c4b-4c90-bb92-778bacd55d6a
-2022-08-26 14:12:57,324 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39671
-2022-08-26 14:12:57,324 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,325 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,326 - distributed.worker - INFO - Run out-of-band function 'test_import'
-2022-08-26 14:12:57,328 - distributed.worker - INFO - Run out-of-band function 'test_startup2'
-2022-08-26 14:12:57,329 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41895
-2022-08-26 14:12:57,330 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41895', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:57,330 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41895
-2022-08-26 14:12:57,330 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d5987895-3a27-414b-a2db-c83a198f64cd Address tcp://127.0.0.1:41895 Status: Status.closing
-2022-08-26 14:12:57,331 - distributed.scheduler - INFO - Remove client Client-db8d21d1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,331 - distributed.scheduler - INFO - Remove client Client-db8d21d1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,331 - distributed.scheduler - INFO - Close client connection: Client-db8d21d1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,331 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45021
-2022-08-26 14:12:57,332 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35209
-2022-08-26 14:12:57,333 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45021', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:57,333 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45021
-2022-08-26 14:12:57,333 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35209', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:57,333 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35209
-2022-08-26 14:12:57,333 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:57,333 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d45b081b-4214-4464-995f-930ee6715506 Address tcp://127.0.0.1:45021 Status: Status.closing
-2022-08-26 14:12:57,333 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-99a648c6-d180-4643-8921-c101f516e104 Address tcp://127.0.0.1:35209 Status: Status.closing
-2022-08-26 14:12:57,334 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:57,334 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_register_worker_callbacks_err 2022-08-26 14:12:57,564 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:57,566 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:57,566 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40399
-2022-08-26 14:12:57,566 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45095
-2022-08-26 14:12:57,571 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46681
-2022-08-26 14:12:57,571 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46681
-2022-08-26 14:12:57,571 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:12:57,571 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37437
-2022-08-26 14:12:57,571 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40399
-2022-08-26 14:12:57,571 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,571 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:12:57,571 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:57,571 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zdaf6ji_
-2022-08-26 14:12:57,571 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,572 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46143
-2022-08-26 14:12:57,572 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46143
-2022-08-26 14:12:57,572 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:12:57,572 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35763
-2022-08-26 14:12:57,572 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40399
-2022-08-26 14:12:57,572 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,572 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:12:57,572 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:57,572 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qzqi_awx
-2022-08-26 14:12:57,572 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,575 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46681', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:57,576 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46681
-2022-08-26 14:12:57,576 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,576 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46143', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:12:57,576 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46143
-2022-08-26 14:12:57,576 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,577 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40399
-2022-08-26 14:12:57,577 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,577 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40399
-2022-08-26 14:12:57,577 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,577 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,577 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,591 - distributed.scheduler - INFO - Receive client connection: Client-dbbce599-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,591 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,594 - distributed.worker - INFO - Starting Worker plugin _WorkerSetupPlugin-b99ddbf7-8710-44db-8aef-7d5a62c77428
-2022-08-26 14:12:57,595 - distributed.worker - INFO - Starting Worker plugin _WorkerSetupPlugin-b99ddbf7-8710-44db-8aef-7d5a62c77428
-2022-08-26 14:12:57,603 - distributed.scheduler - INFO - Remove client Client-dbbce599-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,603 - distributed.scheduler - INFO - Remove client Client-dbbce599-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,603 - distributed.scheduler - INFO - Close client connection: Client-dbbce599-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:57,604 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46681
-2022-08-26 14:12:57,604 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46143
-2022-08-26 14:12:57,605 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46681', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:57,605 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46681
-2022-08-26 14:12:57,605 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46143', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:57,605 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46143
-2022-08-26 14:12:57,605 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:57,605 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-10d78ec4-8453-4db4-82a8-5487d4a6a4ea Address tcp://127.0.0.1:46681 Status: Status.closing
-2022-08-26 14:12:57,606 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f495badb-f76f-49fd-bc88-4bd1379c09d4 Address tcp://127.0.0.1:46143 Status: Status.closing
-2022-08-26 14:12:57,606 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:57,607 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_local_directory 2022-08-26 14:12:57,836 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:57,838 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:57,838 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45659
-2022-08-26 14:12:57,838 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38727
-2022-08-26 14:12:57,841 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41999
-2022-08-26 14:12:57,841 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41999
-2022-08-26 14:12:57,841 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37881
-2022-08-26 14:12:57,841 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45659
-2022-08-26 14:12:57,841 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,841 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:57,841 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:57,841 - distributed.worker - INFO -       Local Directory: /tmp/tmpbk8lpqsg./dask-worker-space/worker-y_737vut
-2022-08-26 14:12:57,841 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,843 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41999', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:57,843 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41999
-2022-08-26 14:12:57,843 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,844 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45659
-2022-08-26 14:12:57,844 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:57,844 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:57,845 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:57,845 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:12:57,845 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41999', status: running, memory: 0, processing: 0>
-2022-08-26 14:12:57,845 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41999
-2022-08-26 14:12:57,845 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:57,845 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41999
-2022-08-26 14:12:57,846 - distributed.diskutils - ERROR - Failed to remove '/tmp/tmpbk8lpqsg./dask-worker-space/worker-y_737vut' (failed in <built-in function lstat>): [Errno 2] No such file or directory: '/tmp/tmpbk8lpqsg./dask-worker-space/worker-y_737vut'
-2022-08-26 14:12:57,846 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e900c1e5-e57f-4117-a0ac-1156f18363c3 Address tcp://127.0.0.1:41999 Status: Status.closing
-PASSED
-distributed/tests/test_worker.py::test_local_directory_make_new_directory 2022-08-26 14:12:58,075 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:58,077 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:58,077 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34225
-2022-08-26 14:12:58,077 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33361
-2022-08-26 14:12:58,080 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33835
-2022-08-26 14:12:58,080 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33835
-2022-08-26 14:12:58,080 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38427
-2022-08-26 14:12:58,080 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34225
-2022-08-26 14:12:58,080 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:58,080 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:58,080 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:58,080 - distributed.worker - INFO -       Local Directory: /tmp/tmpm5_qxagp./foo/bar/dask-worker-space/worker-q3oajs89
-2022-08-26 14:12:58,080 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:58,082 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33835', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:58,082 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33835
-2022-08-26 14:12:58,082 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:58,083 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34225
-2022-08-26 14:12:58,083 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:58,083 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:58,083 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:58,084 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:12:58,084 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33835', status: running, memory: 0, processing: 0>
-2022-08-26 14:12:58,084 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33835
-2022-08-26 14:12:58,084 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:58,084 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33835
-2022-08-26 14:12:58,084 - distributed.diskutils - ERROR - Failed to remove '/tmp/tmpm5_qxagp./foo/bar/dask-worker-space/worker-q3oajs89' (failed in <built-in function lstat>): [Errno 2] No such file or directory: '/tmp/tmpm5_qxagp./foo/bar/dask-worker-space/worker-q3oajs89'
-2022-08-26 14:12:58,085 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e9d509a4-4b41-4e96-80bb-93bf9e051f37 Address tcp://127.0.0.1:33835 Status: Status.closing
-PASSED
-distributed/tests/test_worker.py::test_host_address 2022-08-26 14:12:58,313 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:58,315 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:58,315 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33755
-2022-08-26 14:12:58,315 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39453
-2022-08-26 14:12:58,318 - distributed.scheduler - INFO - Receive client connection: Client-dc2bd729-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:58,318 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:58,321 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:46363
-2022-08-26 14:12:58,321 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:46363
-2022-08-26 14:12:58,322 - distributed.worker - INFO -          dashboard at:            127.0.0.2:34455
-2022-08-26 14:12:58,322 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33755
-2022-08-26 14:12:58,322 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:58,322 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:58,322 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:58,322 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fbi60u7p
-2022-08-26 14:12:58,322 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:58,324 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:46363', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:58,324 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:46363
-2022-08-26 14:12:58,324 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:58,324 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33755
-2022-08-26 14:12:58,324 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:58,325 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:46363
-2022-08-26 14:12:58,325 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:58,325 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-69260c03-2434-40f0-aaed-899bbd8cbef6 Address tcp://127.0.0.2:46363 Status: Status.closing
-2022-08-26 14:12:58,326 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:46363', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:58,326 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:46363
-2022-08-26 14:12:58,326 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:58,329 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.3:44759'
-2022-08-26 14:12:59,088 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.3:40755
-2022-08-26 14:12:59,088 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.3:40755
-2022-08-26 14:12:59,088 - distributed.worker - INFO -          dashboard at:            127.0.0.3:33351
-2022-08-26 14:12:59,088 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33755
-2022-08-26 14:12:59,088 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:59,088 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:59,088 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:59,088 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ibs2_ys8
-2022-08-26 14:12:59,088 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:59,398 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.3:40755', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:59,399 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.3:40755
-2022-08-26 14:12:59,399 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:59,399 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33755
-2022-08-26 14:12:59,399 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:59,400 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:59,415 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.3:44759'.
-2022-08-26 14:12:59,415 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:12:59,415 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.3:40755
-2022-08-26 14:12:59,416 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a6dfd5df-3637-4374-b8e7-369686511b79 Address tcp://127.0.0.3:40755 Status: Status.closing
-2022-08-26 14:12:59,416 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.3:40755', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:59,416 - distributed.core - INFO - Removing comms to tcp://127.0.0.3:40755
-2022-08-26 14:12:59,416 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:59,544 - distributed.scheduler - INFO - Remove client Client-dc2bd729-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:59,545 - distributed.scheduler - INFO - Remove client Client-dc2bd729-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:59,545 - distributed.scheduler - INFO - Close client connection: Client-dc2bd729-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:59,545 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:59,545 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_interface_async[Worker] 2022-08-26 14:12:59,795 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:59,797 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:59,797 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46775
-2022-08-26 14:12:59,797 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46077
-2022-08-26 14:12:59,800 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37001
-2022-08-26 14:12:59,800 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37001
-2022-08-26 14:12:59,800 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44495
-2022-08-26 14:12:59,800 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46775
-2022-08-26 14:12:59,800 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:59,800 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:12:59,800 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:12:59,800 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sla1_eng
-2022-08-26 14:12:59,800 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:59,802 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37001', status: init, memory: 0, processing: 0>
-2022-08-26 14:12:59,802 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37001
-2022-08-26 14:12:59,803 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:59,803 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46775
-2022-08-26 14:12:59,803 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:12:59,803 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:59,806 - distributed.scheduler - INFO - Receive client connection: Client-dd0ed94f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:59,806 - distributed.core - INFO - Starting established connection
-2022-08-26 14:12:59,818 - distributed.scheduler - INFO - Remove client Client-dd0ed94f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:59,818 - distributed.scheduler - INFO - Remove client Client-dd0ed94f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:59,818 - distributed.scheduler - INFO - Close client connection: Client-dd0ed94f-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:12:59,818 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37001
-2022-08-26 14:12:59,819 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37001', status: closing, memory: 0, processing: 0>
-2022-08-26 14:12:59,819 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37001
-2022-08-26 14:12:59,819 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:12:59,819 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-316a2ca4-e1da-4860-87fe-26b7934383c6 Address tcp://127.0.0.1:37001 Status: Status.closing
-2022-08-26 14:12:59,820 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:12:59,820 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_interface_async[Nanny] 2022-08-26 14:12:59,848 - distributed.scheduler - INFO - State start
-2022-08-26 14:12:59,849 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:12:59,850 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:32965
-2022-08-26 14:12:59,850 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46139
-2022-08-26 14:12:59,853 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:42335'
-2022-08-26 14:13:00,619 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37537
-2022-08-26 14:13:00,619 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37537
-2022-08-26 14:13:00,619 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46295
-2022-08-26 14:13:00,619 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32965
-2022-08-26 14:13:00,619 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:00,619 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:00,620 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:00,620 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-w10a_n2d
-2022-08-26 14:13:00,620 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:00,925 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37537', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:00,926 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37537
-2022-08-26 14:13:00,926 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:00,926 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32965
-2022-08-26 14:13:00,926 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:00,927 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:00,945 - distributed.scheduler - INFO - Receive client connection: Client-ddbc8eb2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:00,945 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:00,956 - distributed.scheduler - INFO - Remove client Client-ddbc8eb2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:00,957 - distributed.scheduler - INFO - Remove client Client-ddbc8eb2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:00,957 - distributed.scheduler - INFO - Close client connection: Client-ddbc8eb2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:00,957 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:42335'.
-2022-08-26 14:13:00,957 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:13:00,958 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37537
-2022-08-26 14:13:00,958 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f730dd5d-bd69-4436-934d-62907c71acfc Address tcp://127.0.0.1:37537 Status: Status.closing
-2022-08-26 14:13:00,958 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37537', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:00,959 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37537
-2022-08-26 14:13:00,959 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:01,084 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:01,084 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_protocol_from_scheduler_address[Worker] SKIPPED
-distributed/tests/test_worker.py::test_protocol_from_scheduler_address[Nanny] SKIPPED
-distributed/tests/test_worker.py::test_host_uses_scheduler_protocol 2022-08-26 14:13:01,119 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:01,121 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:01,121 - distributed.scheduler - INFO -   Scheduler at: tcp://192.168.1.159:34427
-2022-08-26 14:13:01,121 - distributed.scheduler - INFO -   dashboard at:                    :32935
-2022-08-26 14:13:01,124 - distributed.worker - INFO -       Start worker at:  tcp://192.168.1.159:44725
-2022-08-26 14:13:01,124 - distributed.worker - INFO -          Listening to:  tcp://192.168.1.159:44725
-2022-08-26 14:13:01,124 - distributed.worker - INFO -          dashboard at:        192.168.1.159:45393
-2022-08-26 14:13:01,124 - distributed.worker - INFO - Waiting to connect to:  tcp://192.168.1.159:34427
-2022-08-26 14:13:01,124 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:01,124 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:01,124 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:01,124 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-n80rgekt
-2022-08-26 14:13:01,124 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:01,126 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://192.168.1.159:44725', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:01,126 - distributed.scheduler - INFO - Starting worker compute stream, tcp://192.168.1.159:44725
-2022-08-26 14:13:01,126 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:01,126 - distributed.worker - INFO -         Registered to:  tcp://192.168.1.159:34427
-2022-08-26 14:13:01,126 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:01,127 - distributed.worker - INFO - Stopping worker at tcp://192.168.1.159:44725
-2022-08-26 14:13:01,127 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:01,127 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fe5c55db-6609-4a35-ab4a-946c4e47a274 Address tcp://192.168.1.159:44725 Status: Status.closing
-2022-08-26 14:13:01,128 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://192.168.1.159:44725', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:01,128 - distributed.core - INFO - Removing comms to tcp://192.168.1.159:44725
-2022-08-26 14:13:01,128 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:01,128 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:01,129 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_listens_on_same_interface_by_default[Worker] 2022-08-26 14:13:01,156 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:01,157 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:01,158 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34549
-2022-08-26 14:13:01,158 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33759
-2022-08-26 14:13:01,160 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45887
-2022-08-26 14:13:01,160 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45887
-2022-08-26 14:13:01,161 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45991
-2022-08-26 14:13:01,161 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34549
-2022-08-26 14:13:01,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:01,161 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:01,161 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:01,161 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ywzlgmfg
-2022-08-26 14:13:01,161 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:01,163 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45887', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:01,163 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45887
-2022-08-26 14:13:01,163 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:01,163 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34549
-2022-08-26 14:13:01,163 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:01,163 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45887
-2022-08-26 14:13:01,164 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:01,164 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7cea808c-5b53-43d8-b699-a85ff89eec76 Address tcp://127.0.0.1:45887 Status: Status.closing
-2022-08-26 14:13:01,165 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45887', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:01,165 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45887
-2022-08-26 14:13:01,165 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:01,165 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:01,165 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_listens_on_same_interface_by_default[Nanny] 2022-08-26 14:13:01,191 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:01,193 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:01,193 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46031
-2022-08-26 14:13:01,193 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37207
-2022-08-26 14:13:01,196 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:41533'
-2022-08-26 14:13:01,967 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36037
-2022-08-26 14:13:01,967 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36037
-2022-08-26 14:13:01,967 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39509
-2022-08-26 14:13:01,967 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46031
-2022-08-26 14:13:01,967 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:01,967 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:01,967 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:01,967 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rz33d_uf
-2022-08-26 14:13:01,967 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,275 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36037', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:02,276 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36037
-2022-08-26 14:13:02,276 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:02,276 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46031
-2022-08-26 14:13:02,276 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,277 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:02,281 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41533'.
-2022-08-26 14:13:02,281 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:13:02,282 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36037
-2022-08-26 14:13:02,282 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-88ce61d7-97f5-46c5-b51b-35f7b165de11 Address tcp://127.0.0.1:36037 Status: Status.closing
-2022-08-26 14:13:02,283 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36037', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:02,283 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36037
-2022-08-26 14:13:02,283 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:02,409 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:02,409 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_close_gracefully SKIPPED (nee...)
-distributed/tests/test_worker.py::test_close_while_executing[False] 2022-08-26 14:13:02,416 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:02,417 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:02,417 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34211
-2022-08-26 14:13:02,418 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41617
-2022-08-26 14:13:02,420 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45223
-2022-08-26 14:13:02,420 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45223
-2022-08-26 14:13:02,420 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:02,420 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39785
-2022-08-26 14:13:02,420 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34211
-2022-08-26 14:13:02,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,421 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:02,421 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:02,421 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y4suaf8_
-2022-08-26 14:13:02,421 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,422 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45223', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:02,423 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45223
-2022-08-26 14:13:02,423 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:02,423 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34211
-2022-08-26 14:13:02,423 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,423 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:02,437 - distributed.scheduler - INFO - Receive client connection: Client-dea04afa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:02,437 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:02,450 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45223
-2022-08-26 14:13:02,451 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-82fa9b97-8027-4c79-b5a5-0b4125f9d32f Address tcp://127.0.0.1:45223 Status: Status.closing
-2022-08-26 14:13:02,451 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45223', name: 0, status: closing, memory: 0, processing: 1>
-2022-08-26 14:13:02,451 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45223
-2022-08-26 14:13:02,452 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:02,459 - distributed.scheduler - INFO - Remove client Client-dea04afa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:02,459 - distributed.scheduler - INFO - Remove client Client-dea04afa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:02,459 - distributed.scheduler - INFO - Close client connection: Client-dea04afa-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:02,459 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:02,460 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_close_while_executing[True] SKIPPED
-distributed/tests/test_worker.py::test_close_async_task_handles_cancellation SKIPPED
-distributed/tests/test_worker.py::test_lifetime SKIPPED (need --runs...)
-distributed/tests/test_worker.py::test_lifetime_stagger 2022-08-26 14:13:02,695 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:02,697 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:02,697 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43395
-2022-08-26 14:13:02,697 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34049
-2022-08-26 14:13:02,701 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41523
-2022-08-26 14:13:02,701 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41523
-2022-08-26 14:13:02,701 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:02,701 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36611
-2022-08-26 14:13:02,702 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43395
-2022-08-26 14:13:02,702 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,702 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:02,702 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:02,702 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cllylego
-2022-08-26 14:13:02,702 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,702 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43151
-2022-08-26 14:13:02,702 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43151
-2022-08-26 14:13:02,702 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:02,702 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41161
-2022-08-26 14:13:02,702 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43395
-2022-08-26 14:13:02,702 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,703 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:02,703 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:02,703 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-5kuljt02
-2022-08-26 14:13:02,703 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,705 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41523', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:02,706 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41523
-2022-08-26 14:13:02,706 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:02,706 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43151', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:02,706 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43151
-2022-08-26 14:13:02,706 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:02,707 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43395
-2022-08-26 14:13:02,707 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,707 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43395
-2022-08-26 14:13:02,707 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,707 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:02,707 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:02,718 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41523
-2022-08-26 14:13:02,719 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43151
-2022-08-26 14:13:02,720 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41523', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:02,720 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41523
-2022-08-26 14:13:02,720 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43151', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:02,720 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43151
-2022-08-26 14:13:02,720 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:02,720 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a17d24b5-6979-40b9-8d94-d7e024daff91 Address tcp://127.0.0.1:41523 Status: Status.closing
-2022-08-26 14:13:02,720 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-56ab40e3-91e0-4f31-88c6-53fdfd17a5d2 Address tcp://127.0.0.1:43151 Status: Status.closing
-2022-08-26 14:13:02,721 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:02,721 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_bad_metrics 2022-08-26 14:13:02,951 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:02,952 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:02,953 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43099
-2022-08-26 14:13:02,953 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40553
-2022-08-26 14:13:02,955 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45111
-2022-08-26 14:13:02,955 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45111
-2022-08-26 14:13:02,955 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44921
-2022-08-26 14:13:02,956 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43099
-2022-08-26 14:13:02,956 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,956 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:02,956 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:02,956 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nuu47ao_
-2022-08-26 14:13:02,956 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,958 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45111', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:02,958 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45111
-2022-08-26 14:13:02,958 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:02,958 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43099
-2022-08-26 14:13:02,958 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:02,958 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45111
-2022-08-26 14:13:02,959 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:02,959 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-983c416f-3926-493b-a7d4-6beb4d4df31f Address tcp://127.0.0.1:45111 Status: Status.closing
-2022-08-26 14:13:02,959 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45111', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:02,960 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45111
-2022-08-26 14:13:02,960 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:02,960 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:02,960 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_bad_startup 2022-08-26 14:13:03,189 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:03,191 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:03,191 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36219
-2022-08-26 14:13:03,191 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39781
-2022-08-26 14:13:03,194 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42119
-2022-08-26 14:13:03,194 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42119
-2022-08-26 14:13:03,194 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33877
-2022-08-26 14:13:03,194 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36219
-2022-08-26 14:13:03,194 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,194 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:03,194 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:03,194 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-c17n6sy_
-2022-08-26 14:13:03,194 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,196 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42119', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:03,196 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42119
-2022-08-26 14:13:03,196 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,196 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36219
-2022-08-26 14:13:03,197 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,197 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,197 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:03,197 - distributed.scheduler - INFO - Scheduler closing all comms
-2022-08-26 14:13:03,198 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42119', status: running, memory: 0, processing: 0>
-2022-08-26 14:13:03,198 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42119
-2022-08-26 14:13:03,198 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:03,198 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42119
-2022-08-26 14:13:03,198 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a9cbda05-bc80-488d-9da2-bfad80f6c426 Address tcp://127.0.0.1:42119 Status: Status.closing
-PASSED
-distributed/tests/test_worker.py::test_pip_install 2022-08-26 14:13:03,427 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:03,429 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:03,429 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35199
-2022-08-26 14:13:03,429 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36935
-2022-08-26 14:13:03,433 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40555
-2022-08-26 14:13:03,433 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40555
-2022-08-26 14:13:03,433 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:03,434 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35659
-2022-08-26 14:13:03,434 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35199
-2022-08-26 14:13:03,434 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,434 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:03,434 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:03,434 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-red9ig57
-2022-08-26 14:13:03,434 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,434 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38055
-2022-08-26 14:13:03,434 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38055
-2022-08-26 14:13:03,434 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:03,434 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34391
-2022-08-26 14:13:03,434 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35199
-2022-08-26 14:13:03,435 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,435 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:03,435 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:03,435 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0pdu0402
-2022-08-26 14:13:03,435 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,437 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40555', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:03,438 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40555
-2022-08-26 14:13:03,438 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,438 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38055', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:03,438 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38055
-2022-08-26 14:13:03,438 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,439 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35199
-2022-08-26 14:13:03,439 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,439 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35199
-2022-08-26 14:13:03,439 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,439 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,439 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,453 - distributed.scheduler - INFO - Receive client connection: Client-df3b5be1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:03,453 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,457 - distributed.worker - INFO - Starting Worker plugin pip
-2022-08-26 14:13:03,457 - distributed.worker - INFO - Starting Worker plugin pip
-2022-08-26 14:13:03,459 - distributed.diagnostics.plugin - INFO - Pip installing the following packages: ['requests']
-2022-08-26 14:13:03,460 - distributed.diagnostics.plugin - INFO - Pip installing the following packages: ['requests']
-2022-08-26 14:13:03,465 - distributed.scheduler - INFO - Remove client Client-df3b5be1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:03,465 - distributed.scheduler - INFO - Remove client Client-df3b5be1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:03,465 - distributed.scheduler - INFO - Close client connection: Client-df3b5be1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:03,466 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40555
-2022-08-26 14:13:03,466 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38055
-2022-08-26 14:13:03,467 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40555', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:03,467 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40555
-2022-08-26 14:13:03,467 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38055', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:03,467 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38055
-2022-08-26 14:13:03,467 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:03,467 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b0302742-d156-4608-9776-0ad37ba6af38 Address tcp://127.0.0.1:40555 Status: Status.closing
-2022-08-26 14:13:03,468 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1ab7a944-366c-4150-aed2-9ad73c0eacac Address tcp://127.0.0.1:38055 Status: Status.closing
-2022-08-26 14:13:03,468 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:03,469 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_pip_install_fails 2022-08-26 14:13:03,698 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:03,700 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:03,700 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38747
-2022-08-26 14:13:03,700 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40453
-2022-08-26 14:13:03,704 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33797
-2022-08-26 14:13:03,705 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33797
-2022-08-26 14:13:03,705 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:03,705 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43101
-2022-08-26 14:13:03,705 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38747
-2022-08-26 14:13:03,705 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,705 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:03,705 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:03,705 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4lm4_2qn
-2022-08-26 14:13:03,705 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,705 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40301
-2022-08-26 14:13:03,705 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40301
-2022-08-26 14:13:03,706 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:03,706 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42549
-2022-08-26 14:13:03,706 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38747
-2022-08-26 14:13:03,706 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,706 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:03,706 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:03,706 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-itdpdfa5
-2022-08-26 14:13:03,706 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,709 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33797', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:03,709 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33797
-2022-08-26 14:13:03,709 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,709 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40301', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:03,710 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40301
-2022-08-26 14:13:03,710 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,710 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38747
-2022-08-26 14:13:03,710 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,710 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38747
-2022-08-26 14:13:03,710 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,711 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,711 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,724 - distributed.scheduler - INFO - Receive client connection: Client-df64c330-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:03,724 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,728 - distributed.worker - INFO - Starting Worker plugin pip
-2022-08-26 14:13:03,729 - distributed.worker - INFO - Starting Worker plugin pip
-2022-08-26 14:13:03,731 - distributed.diagnostics.plugin - ERROR - Pip install failed with 'Could not find a version that satisfies the requirement not-a-package'
-2022-08-26 14:13:03,732 - distributed.diagnostics.plugin - ERROR - Pip install failed with 'Could not find a version that satisfies the requirement not-a-package'
-2022-08-26 14:13:03,736 - distributed.scheduler - INFO - Remove client Client-df64c330-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:03,736 - distributed.scheduler - INFO - Remove client Client-df64c330-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:03,737 - distributed.scheduler - INFO - Close client connection: Client-df64c330-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:03,737 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33797
-2022-08-26 14:13:03,737 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40301
-2022-08-26 14:13:03,738 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33797', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:03,738 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33797
-2022-08-26 14:13:03,739 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40301', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:03,739 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40301
-2022-08-26 14:13:03,739 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:03,739 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-44efbfb7-4f06-4569-8a90-5369ed9126de Address tcp://127.0.0.1:33797 Status: Status.closing
-2022-08-26 14:13:03,739 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-73b9de61-1de4-425e-98d5-d8a14223ac0f Address tcp://127.0.0.1:40301 Status: Status.closing
-2022-08-26 14:13:03,740 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:03,740 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_update_latency 2022-08-26 14:13:03,970 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:03,972 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:03,972 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35591
-2022-08-26 14:13:03,972 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40767
-2022-08-26 14:13:03,975 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36829
-2022-08-26 14:13:03,975 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36829
-2022-08-26 14:13:03,975 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36787
-2022-08-26 14:13:03,975 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35591
-2022-08-26 14:13:03,975 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,975 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:03,975 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:03,975 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-560d7d72
-2022-08-26 14:13:03,975 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,977 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36829', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:03,977 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36829
-2022-08-26 14:13:03,978 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,978 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35591
-2022-08-26 14:13:03,978 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:03,978 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:03,980 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36829
-2022-08-26 14:13:03,980 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36829', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:03,981 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36829
-2022-08-26 14:13:03,981 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:03,981 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-62d159b6-a481-4082-b85b-512d32c6ba5d Address tcp://127.0.0.1:36829 Status: Status.closing
-2022-08-26 14:13:03,982 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:03,982 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_workerstate_executing SKIPPED
-distributed/tests/test_worker.py::test_shutdown_on_scheduler_comm_closed 2022-08-26 14:13:04,212 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:04,213 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:04,213 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40097
-2022-08-26 14:13:04,213 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45997
-2022-08-26 14:13:04,216 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35339
-2022-08-26 14:13:04,216 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35339
-2022-08-26 14:13:04,216 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:04,216 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36461
-2022-08-26 14:13:04,216 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40097
-2022-08-26 14:13:04,216 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:04,216 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:04,217 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:04,217 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2p9vr_sl
-2022-08-26 14:13:04,217 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:04,219 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35339', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:04,219 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35339
-2022-08-26 14:13:04,219 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:04,219 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40097
-2022-08-26 14:13:04,219 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:04,219 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:04,230 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35339', name: 0, status: running, memory: 0, processing: 0>
-2022-08-26 14:13:04,230 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35339
-2022-08-26 14:13:04,230 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:04,231 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-272728d3-31d3-4ffe-bf3f-7292a0ae3b06 Address tcp://127.0.0.1:35339 Status: Status.running
-2022-08-26 14:13:04,231 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35339
-2022-08-26 14:13:04,232 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:04,232 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_heartbeat_comm_closed 2022-08-26 14:13:04,460 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:04,462 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:04,462 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46109
-2022-08-26 14:13:04,462 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33283
-2022-08-26 14:13:04,467 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35709', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:04,467 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35709
-2022-08-26 14:13:04,467 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:04,467 - distributed.worker - WARNING - Heartbeat to scheduler failed
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1158, in heartbeat
-    response = await retry_operation(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 383, in retry_operation
-    return await retry(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 368, in retry
-    return await coro()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py", line 1726, in bad_heartbeat_worker
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:13:04,468 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:04,468 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35709', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:04,469 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35709
-2022-08-26 14:13:04,469 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:04,469 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:04,469 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_heartbeat_missing 2022-08-26 14:13:04,699 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:04,700 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:04,700 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33891
-2022-08-26 14:13:04,700 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34477
-2022-08-26 14:13:04,703 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40207
-2022-08-26 14:13:04,703 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40207
-2022-08-26 14:13:04,703 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:04,703 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46081
-2022-08-26 14:13:04,703 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33891
-2022-08-26 14:13:04,703 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:04,703 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:04,704 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:04,704 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-plyl9bmd
-2022-08-26 14:13:04,704 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:04,705 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40207', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:04,706 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40207
-2022-08-26 14:13:04,706 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:04,706 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33891
-2022-08-26 14:13:04,706 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:04,706 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:04,717 - distributed.worker - ERROR - Scheduler was unaware of this worker 'tcp://127.0.0.1:40207'. Shutting down.
-2022-08-26 14:13:04,718 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40207', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:04,718 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40207
-2022-08-26 14:13:04,718 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:04,719 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:04,719 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_heartbeat_missing_real_cluster 2022-08-26 14:13:04,949 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:04,950 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:04,950 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37885
-2022-08-26 14:13:04,951 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46669
-2022-08-26 14:13:04,953 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33615
-2022-08-26 14:13:04,953 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33615
-2022-08-26 14:13:04,954 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:04,954 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41157
-2022-08-26 14:13:04,954 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37885
-2022-08-26 14:13:04,954 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:04,954 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:04,954 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:04,954 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-z7u3pkhp
-2022-08-26 14:13:04,954 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:04,956 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33615', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:04,956 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33615
-2022-08-26 14:13:04,956 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:04,956 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37885
-2022-08-26 14:13:04,956 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:04,957 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:04,967 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33615
-2022-08-26 14:13:04,969 - distributed.scheduler - WARNING - Received heartbeat from unregistered worker 'tcp://127.0.0.1:33615'.
-2022-08-26 14:13:04,969 - distributed.worker - ERROR - Scheduler was unaware of this worker 'tcp://127.0.0.1:33615'. Shutting down.
-2022-08-26 14:13:04,971 - distributed.batched - INFO - Batched Comm Closed <TCP (closed) Scheduler connection to worker local=tcp://127.0.0.1:37885 remote=tcp://127.0.0.1:52062>
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/batched.py", line 115, in _background_send
-    nbytes = yield coro
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/gen.py", line 769, in run
-    value = future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 1817, in write
-    return await self.comm.write(msg, serializers=serializers, on_error=on_error)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 269, in write
-    raise CommClosedError()
-distributed.comm.core.CommClosedError
-2022-08-26 14:13:04,971 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:04,971 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_heartbeat_missing_restarts 2022-08-26 14:13:05,200 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:05,201 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:05,202 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39401
-2022-08-26 14:13:05,202 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45811
-2022-08-26 14:13:05,205 - distributed.nanny - INFO -         Start Nanny at: 'tcp://127.0.0.1:43123'
-2022-08-26 14:13:05,970 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35271
-2022-08-26 14:13:05,970 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35271
-2022-08-26 14:13:05,970 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:05,970 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39509
-2022-08-26 14:13:05,970 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39401
-2022-08-26 14:13:05,970 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:05,970 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:05,970 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:05,970 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vp8dea4w
-2022-08-26 14:13:05,970 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:06,278 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35271', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:06,278 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35271
-2022-08-26 14:13:06,278 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:06,279 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39401
-2022-08-26 14:13:06,279 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:06,279 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:06,295 - distributed.scheduler - INFO - Receive client connection: Client-e0ed1534-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:06,296 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:06,782 - distributed.worker - ERROR - Scheduler was unaware of this worker 'tcp://127.0.0.1:35271'. Shutting down.
-2022-08-26 14:13:06,782 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35271
-2022-08-26 14:13:06,783 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e96a43da-f605-428c-8e18-2a81a9114e5c Address tcp://127.0.0.1:35271 Status: Status.closing
-2022-08-26 14:13:06,783 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35271', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:06,783 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35271
-2022-08-26 14:13:06,783 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:06,784 - distributed.nanny - INFO - Worker closed
-2022-08-26 14:13:06,784 - distributed.nanny - ERROR - Worker process died unexpectedly
-2022-08-26 14:13:06,909 - distributed.nanny - WARNING - Restarting worker
-2022-08-26 14:13:07,680 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40981
-2022-08-26 14:13:07,680 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40981
-2022-08-26 14:13:07,680 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:07,680 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44917
-2022-08-26 14:13:07,680 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39401
-2022-08-26 14:13:07,680 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:07,680 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:07,680 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:07,680 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vih0xgnz
-2022-08-26 14:13:07,680 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:07,985 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40981', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:07,985 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40981
-2022-08-26 14:13:07,985 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:07,986 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39401
-2022-08-26 14:13:07,986 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:07,986 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:08,024 - distributed.scheduler - INFO - Remove client Client-e0ed1534-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:08,024 - distributed.scheduler - INFO - Remove client Client-e0ed1534-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:08,024 - distributed.scheduler - INFO - Close client connection: Client-e0ed1534-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:08,025 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:43123'.
-2022-08-26 14:13:08,025 - distributed.nanny - INFO - Nanny asking worker to close
-2022-08-26 14:13:08,025 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40981
-2022-08-26 14:13:08,026 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d2fa6bc9-eb22-46cb-b941-a0a76bca44a9 Address tcp://127.0.0.1:40981 Status: Status.closing
-2022-08-26 14:13:08,026 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40981', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:08,026 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40981
-2022-08-26 14:13:08,026 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:08,151 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:08,151 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_bad_local_directory 2022-08-26 14:13:08,381 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:08,383 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:08,383 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41623
-2022-08-26 14:13:08,383 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45097
-2022-08-26 14:13:08,384 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:08,384 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_taskstate_metadata 2022-08-26 14:13:08,613 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:08,614 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:08,614 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43585
-2022-08-26 14:13:08,615 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45163
-2022-08-26 14:13:08,618 - distributed.scheduler - INFO - Receive client connection: Client-e24f6d67-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:08,618 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:08,621 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41091
-2022-08-26 14:13:08,621 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41091
-2022-08-26 14:13:08,621 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42493
-2022-08-26 14:13:08,621 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43585
-2022-08-26 14:13:08,621 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:08,621 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:08,621 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:08,621 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cb4orjmo
-2022-08-26 14:13:08,621 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:08,623 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41091', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:08,624 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41091
-2022-08-26 14:13:08,624 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:08,624 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43585
-2022-08-26 14:13:08,624 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:08,624 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:08,626 - distributed.worker - INFO - Starting Worker plugin TaskStateMetadataPlugin-931fb3ff-4214-4e1f-ab29-caba66ce94e9
-2022-08-26 14:13:08,633 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41091
-2022-08-26 14:13:08,634 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41091', status: closing, memory: 1, processing: 0>
-2022-08-26 14:13:08,634 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41091
-2022-08-26 14:13:08,634 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:08,634 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bc5f9bb7-a2a5-4419-b674-e18d31e1a5dc Address tcp://127.0.0.1:41091 Status: Status.closing
-2022-08-26 14:13:08,640 - distributed.scheduler - INFO - Remove client Client-e24f6d67-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:08,640 - distributed.scheduler - INFO - Remove client Client-e24f6d67-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:08,640 - distributed.scheduler - INFO - Close client connection: Client-e24f6d67-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:08,641 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:08,641 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_executor_offload 2022-08-26 14:13:08,871 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:08,872 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:08,873 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36681
-2022-08-26 14:13:08,873 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38221
-2022-08-26 14:13:08,876 - distributed.scheduler - INFO - Receive client connection: Client-e276d337-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:08,876 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:08,879 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38105
-2022-08-26 14:13:08,879 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38105
-2022-08-26 14:13:08,879 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42959
-2022-08-26 14:13:08,879 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36681
-2022-08-26 14:13:08,879 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:08,880 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:08,880 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:08,880 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sfdofn7w
-2022-08-26 14:13:08,880 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:08,882 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38105', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:08,882 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38105
-2022-08-26 14:13:08,882 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:08,882 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36681
-2022-08-26 14:13:08,882 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:08,884 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:08,892 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38105
-2022-08-26 14:13:08,892 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38105', status: closing, memory: 1, processing: 0>
-2022-08-26 14:13:08,893 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38105
-2022-08-26 14:13:08,893 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:08,893 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bdfc8447-cccd-4fe8-a59e-c403465c2fbb Address tcp://127.0.0.1:38105 Status: Status.closing
-2022-08-26 14:13:08,898 - distributed.scheduler - INFO - Remove client Client-e276d337-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:08,898 - distributed.scheduler - INFO - Remove client Client-e276d337-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:08,899 - distributed.scheduler - INFO - Close client connection: Client-e276d337-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:08,899 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:08,899 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_story 2022-08-26 14:13:09,129 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:09,131 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:09,131 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33621
-2022-08-26 14:13:09,131 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36573
-2022-08-26 14:13:09,134 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44881
-2022-08-26 14:13:09,134 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44881
-2022-08-26 14:13:09,134 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:09,134 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38823
-2022-08-26 14:13:09,134 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33621
-2022-08-26 14:13:09,134 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:09,134 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:09,134 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:09,135 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-08a517e7
-2022-08-26 14:13:09,135 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:09,136 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44881', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:09,137 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44881
-2022-08-26 14:13:09,137 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:09,137 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33621
-2022-08-26 14:13:09,137 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:09,137 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:09,151 - distributed.scheduler - INFO - Receive client connection: Client-e2a0bf41-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,151 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:09,172 - distributed.scheduler - INFO - Remove client Client-e2a0bf41-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,172 - distributed.scheduler - INFO - Remove client Client-e2a0bf41-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,173 - distributed.scheduler - INFO - Close client connection: Client-e2a0bf41-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,174 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44881
-2022-08-26 14:13:09,174 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f2123b83-ede9-4742-b033-d2c0933efc71 Address tcp://127.0.0.1:44881 Status: Status.closing
-2022-08-26 14:13:09,175 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44881', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:09,175 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44881
-2022-08-26 14:13:09,175 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:09,175 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:09,175 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_stimulus_story 2022-08-26 14:13:09,406 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:09,407 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:09,408 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40577
-2022-08-26 14:13:09,408 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41603
-2022-08-26 14:13:09,410 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33837
-2022-08-26 14:13:09,410 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33837
-2022-08-26 14:13:09,410 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:09,411 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45515
-2022-08-26 14:13:09,411 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40577
-2022-08-26 14:13:09,411 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:09,411 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:09,411 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:09,411 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8pah7kt0
-2022-08-26 14:13:09,411 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:09,413 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33837', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:09,413 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33837
-2022-08-26 14:13:09,413 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:09,413 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40577
-2022-08-26 14:13:09,413 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:09,414 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:09,427 - distributed.scheduler - INFO - Receive client connection: Client-e2caf0e9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,427 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:09,446 - distributed.worker - WARNING - Compute Failed
-Key:       f2
-Function:  inc
-args:      ('foo')
-kwargs:    {}
-Exception: 'TypeError(\'can only concatenate str (not "int") to str\')'
-
-2022-08-26 14:13:09,449 - distributed.scheduler - INFO - Remove client Client-e2caf0e9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,449 - distributed.scheduler - INFO - Remove client Client-e2caf0e9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,450 - distributed.scheduler - INFO - Close client connection: Client-e2caf0e9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,450 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33837
-2022-08-26 14:13:09,451 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33837', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:09,451 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33837
-2022-08-26 14:13:09,451 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:09,451 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cb6d3266-7197-4190-870b-8e44f555b3d4 Address tcp://127.0.0.1:33837 Status: Status.closing
-2022-08-26 14:13:09,452 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:09,452 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_descopes_data 2022-08-26 14:13:09,683 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:09,684 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:09,685 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44217
-2022-08-26 14:13:09,685 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44907
-2022-08-26 14:13:09,687 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44909
-2022-08-26 14:13:09,687 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44909
-2022-08-26 14:13:09,688 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:09,688 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42439
-2022-08-26 14:13:09,688 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44217
-2022-08-26 14:13:09,688 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:09,688 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:09,688 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:09,688 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_py8mrjt
-2022-08-26 14:13:09,688 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:09,690 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44909', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:09,690 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44909
-2022-08-26 14:13:09,690 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:09,691 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44217
-2022-08-26 14:13:09,691 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:09,691 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:09,704 - distributed.scheduler - INFO - Receive client connection: Client-e2f53cd7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,705 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:09,724 - distributed.worker - WARNING - Compute Failed
-Key:       f2
-Function:  f
-args:      (<test_worker.test_worker_descopes_data.<locals>.C object at 0x7f15c40086d0>)
-kwargs:    {}
-Exception: 'Exception(<test_worker.test_worker_descopes_data.<locals>.C object at 0x7f15c40086d0>, <test_worker.test_worker_descopes_data.<locals>.C object at 0x5640427e82c0>)'
-
-2022-08-26 14:13:09,963 - distributed.scheduler - INFO - Remove client Client-e2f53cd7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,964 - distributed.scheduler - INFO - Remove client Client-e2f53cd7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,964 - distributed.scheduler - INFO - Close client connection: Client-e2f53cd7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:09,964 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44909
-2022-08-26 14:13:09,965 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44909', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:09,965 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44909
-2022-08-26 14:13:09,965 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:09,965 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4feb67f1-023a-467e-bacd-7669a5209f2e Address tcp://127.0.0.1:44909 Status: Status.closing
-2022-08-26 14:13:09,966 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:09,966 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gather_dep_one_worker_always_busy SKIPPED
-distributed/tests/test_worker.py::test_gather_dep_local_workers_first 2022-08-26 14:13:10,197 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:10,199 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:10,199 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33065
-2022-08-26 14:13:10,199 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36389
-2022-08-26 14:13:10,222 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43377
-2022-08-26 14:13:10,222 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43377
-2022-08-26 14:13:10,222 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:10,222 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36021
-2022-08-26 14:13:10,222 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,222 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,222 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,222 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,222 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yaakwgsr
-2022-08-26 14:13:10,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,223 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45815
-2022-08-26 14:13:10,223 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45815
-2022-08-26 14:13:10,223 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:10,223 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33199
-2022-08-26 14:13:10,223 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,223 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,223 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,223 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,224 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ukomy5k7
-2022-08-26 14:13:10,224 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,225 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:39743
-2022-08-26 14:13:10,225 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:39743
-2022-08-26 14:13:10,225 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:13:10,225 - distributed.worker - INFO -          dashboard at:            127.0.0.2:44505
-2022-08-26 14:13:10,225 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,225 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,225 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:10,225 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,225 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rf1_yli2
-2022-08-26 14:13:10,225 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,226 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:42289
-2022-08-26 14:13:10,226 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:42289
-2022-08-26 14:13:10,226 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:13:10,226 - distributed.worker - INFO -          dashboard at:            127.0.0.2:33289
-2022-08-26 14:13:10,226 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,226 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,226 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:10,226 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,226 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-y1akvxb8
-2022-08-26 14:13:10,226 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,227 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:37047
-2022-08-26 14:13:10,227 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:37047
-2022-08-26 14:13:10,227 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:13:10,227 - distributed.worker - INFO -          dashboard at:            127.0.0.2:45349
-2022-08-26 14:13:10,227 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,227 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,227 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:10,227 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,227 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tgd2gl1y
-2022-08-26 14:13:10,228 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,228 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:42649
-2022-08-26 14:13:10,228 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:42649
-2022-08-26 14:13:10,228 - distributed.worker - INFO -           Worker name:                          5
-2022-08-26 14:13:10,228 - distributed.worker - INFO -          dashboard at:            127.0.0.2:39053
-2022-08-26 14:13:10,228 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,228 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,228 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:10,228 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,228 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-i6j4lexw
-2022-08-26 14:13:10,229 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,229 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:33895
-2022-08-26 14:13:10,229 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:33895
-2022-08-26 14:13:10,229 - distributed.worker - INFO -           Worker name:                          6
-2022-08-26 14:13:10,229 - distributed.worker - INFO -          dashboard at:            127.0.0.2:39045
-2022-08-26 14:13:10,229 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,229 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,229 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:10,229 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,230 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lscfkayr
-2022-08-26 14:13:10,230 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,230 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:46667
-2022-08-26 14:13:10,230 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:46667
-2022-08-26 14:13:10,230 - distributed.worker - INFO -           Worker name:                          7
-2022-08-26 14:13:10,230 - distributed.worker - INFO -          dashboard at:            127.0.0.2:45137
-2022-08-26 14:13:10,230 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,230 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,230 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:10,231 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,231 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x74f02dj
-2022-08-26 14:13:10,231 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,231 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:38237
-2022-08-26 14:13:10,231 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:38237
-2022-08-26 14:13:10,231 - distributed.worker - INFO -           Worker name:                          8
-2022-08-26 14:13:10,231 - distributed.worker - INFO -          dashboard at:            127.0.0.2:35507
-2022-08-26 14:13:10,231 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,231 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,231 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:10,232 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,232 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ubl2gkaw
-2022-08-26 14:13:10,232 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,232 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:37405
-2022-08-26 14:13:10,232 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:37405
-2022-08-26 14:13:10,232 - distributed.worker - INFO -           Worker name:                          9
-2022-08-26 14:13:10,232 - distributed.worker - INFO -          dashboard at:            127.0.0.2:41373
-2022-08-26 14:13:10,232 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,232 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,232 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:10,233 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,233 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qvwxsmip
-2022-08-26 14:13:10,233 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,233 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:42343
-2022-08-26 14:13:10,233 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:42343
-2022-08-26 14:13:10,233 - distributed.worker - INFO -           Worker name:                         10
-2022-08-26 14:13:10,233 - distributed.worker - INFO -          dashboard at:            127.0.0.2:37883
-2022-08-26 14:13:10,233 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,233 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,234 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:10,234 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,234 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kyjj0ta2
-2022-08-26 14:13:10,234 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,234 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:43823
-2022-08-26 14:13:10,234 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:43823
-2022-08-26 14:13:10,234 - distributed.worker - INFO -           Worker name:                         11
-2022-08-26 14:13:10,234 - distributed.worker - INFO -          dashboard at:            127.0.0.2:32947
-2022-08-26 14:13:10,234 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,234 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,235 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:10,235 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,235 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-wz4jm5gx
-2022-08-26 14:13:10,235 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,247 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43377', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,247 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43377
-2022-08-26 14:13:10,247 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,248 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45815', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,248 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45815
-2022-08-26 14:13:10,248 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,248 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:39743', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,248 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:39743
-2022-08-26 14:13:10,249 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,249 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:42289', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,249 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:42289
-2022-08-26 14:13:10,249 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,249 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:37047', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,250 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:37047
-2022-08-26 14:13:10,250 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,250 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:42649', name: 5, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,250 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:42649
-2022-08-26 14:13:10,250 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,251 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:33895', name: 6, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,251 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:33895
-2022-08-26 14:13:10,251 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,251 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:46667', name: 7, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,252 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:46667
-2022-08-26 14:13:10,252 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,252 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:38237', name: 8, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,252 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:38237
-2022-08-26 14:13:10,252 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,253 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:37405', name: 9, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,253 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:37405
-2022-08-26 14:13:10,253 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,253 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:42343', name: 10, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,254 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:42343
-2022-08-26 14:13:10,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,254 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:43823', name: 11, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,254 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:43823
-2022-08-26 14:13:10,254 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,255 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,255 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,255 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,255 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,256 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,256 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,256 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,256 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,257 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,257 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,257 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,257 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,258 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,258 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,258 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33065
-2022-08-26 14:13:10,258 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,275 - distributed.scheduler - INFO - Receive client connection: Client-e34c592a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:10,276 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,317 - distributed.scheduler - INFO - Remove client Client-e34c592a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:10,317 - distributed.scheduler - INFO - Remove client Client-e34c592a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:10,318 - distributed.scheduler - INFO - Close client connection: Client-e34c592a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:10,323 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43377
-2022-08-26 14:13:10,324 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45815
-2022-08-26 14:13:10,324 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:39743
-2022-08-26 14:13:10,324 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:42289
-2022-08-26 14:13:10,325 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:37047
-2022-08-26 14:13:10,325 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:42649
-2022-08-26 14:13:10,325 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:33895
-2022-08-26 14:13:10,325 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:46667
-2022-08-26 14:13:10,326 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:38237
-2022-08-26 14:13:10,326 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:37405
-2022-08-26 14:13:10,326 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:42343
-2022-08-26 14:13:10,327 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:43823
-2022-08-26 14:13:10,330 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43377', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,330 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43377
-2022-08-26 14:13:10,330 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45815', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,330 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45815
-2022-08-26 14:13:10,330 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:39743', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,331 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:39743
-2022-08-26 14:13:10,331 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:42289', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,331 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:42289
-2022-08-26 14:13:10,331 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:37047', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,331 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:37047
-2022-08-26 14:13:10,331 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:42649', name: 5, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,331 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:42649
-2022-08-26 14:13:10,331 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:33895', name: 6, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,331 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:33895
-2022-08-26 14:13:10,332 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:46667', name: 7, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,332 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:46667
-2022-08-26 14:13:10,332 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:38237', name: 8, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,332 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:38237
-2022-08-26 14:13:10,332 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:37405', name: 9, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,332 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:37405
-2022-08-26 14:13:10,332 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:42343', name: 10, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,332 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:42343
-2022-08-26 14:13:10,332 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:43823', name: 11, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,333 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:43823
-2022-08-26 14:13:10,333 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:10,333 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9b77e31c-46f6-4520-9d58-2b5de0dbaf81 Address tcp://127.0.0.1:43377 Status: Status.closing
-2022-08-26 14:13:10,333 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f21bd197-9173-42ef-a265-b04949adbb6a Address tcp://127.0.0.1:45815 Status: Status.closing
-2022-08-26 14:13:10,333 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-23281c85-9956-426b-b7b6-3e4d7bdff6f5 Address tcp://127.0.0.2:39743 Status: Status.closing
-2022-08-26 14:13:10,333 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-97241b99-9016-407e-b137-2faa7e81909b Address tcp://127.0.0.2:42289 Status: Status.closing
-2022-08-26 14:13:10,334 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c748097b-2b0d-4870-8009-013f0c736037 Address tcp://127.0.0.2:37047 Status: Status.closing
-2022-08-26 14:13:10,334 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9792664b-f604-4597-aae8-7c60096f97ed Address tcp://127.0.0.2:42649 Status: Status.closing
-2022-08-26 14:13:10,334 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-cb652608-8d62-4173-8962-33297fe5ccb3 Address tcp://127.0.0.2:33895 Status: Status.closing
-2022-08-26 14:13:10,334 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e1ae21c9-44f5-4a09-83a5-ccce0e7e83c0 Address tcp://127.0.0.2:46667 Status: Status.closing
-2022-08-26 14:13:10,334 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-14ec804a-71b1-4c0d-a142-7e53471938bc Address tcp://127.0.0.2:38237 Status: Status.closing
-2022-08-26 14:13:10,335 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-520b7699-0b80-44a7-80bc-f58c80a96c63 Address tcp://127.0.0.2:37405 Status: Status.closing
-2022-08-26 14:13:10,335 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e4590b46-d9a8-4748-84ae-eb0f46c0148b Address tcp://127.0.0.2:42343 Status: Status.closing
-2022-08-26 14:13:10,335 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-fe79b3fd-e9d7-4c7b-9723-05d4ab1c73ab Address tcp://127.0.0.2:43823 Status: Status.closing
-2022-08-26 14:13:10,341 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:10,341 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gather_dep_from_remote_workers_if_all_local_workers_are_busy 2022-08-26 14:13:10,577 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:10,579 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:10,579 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36367
-2022-08-26 14:13:10,579 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34055
-2022-08-26 14:13:10,599 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.2:40677
-2022-08-26 14:13:10,599 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.2:40677
-2022-08-26 14:13:10,600 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:10,600 - distributed.worker - INFO -          dashboard at:            127.0.0.2:33971
-2022-08-26 14:13:10,600 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,600 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,600 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,600 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,600 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ealcps3e
-2022-08-26 14:13:10,600 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,601 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46703
-2022-08-26 14:13:10,601 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46703
-2022-08-26 14:13:10,601 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:10,601 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46691
-2022-08-26 14:13:10,601 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,601 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,601 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,601 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,601 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-63a8ryb3
-2022-08-26 14:13:10,601 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,602 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43509
-2022-08-26 14:13:10,602 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43509
-2022-08-26 14:13:10,602 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:13:10,602 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33781
-2022-08-26 14:13:10,602 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,602 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,602 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,602 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,602 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-baq7shmb
-2022-08-26 14:13:10,602 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,603 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44737
-2022-08-26 14:13:10,603 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44737
-2022-08-26 14:13:10,603 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:13:10,603 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40937
-2022-08-26 14:13:10,603 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,603 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,603 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,603 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,603 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-qp5lxd6b
-2022-08-26 14:13:10,604 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,604 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40157
-2022-08-26 14:13:10,604 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40157
-2022-08-26 14:13:10,604 - distributed.worker - INFO -           Worker name:                          4
-2022-08-26 14:13:10,604 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39281
-2022-08-26 14:13:10,604 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,604 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,604 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,605 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,605 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nsgxrg6k
-2022-08-26 14:13:10,605 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,605 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38817
-2022-08-26 14:13:10,605 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38817
-2022-08-26 14:13:10,605 - distributed.worker - INFO -           Worker name:                          5
-2022-08-26 14:13:10,605 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40641
-2022-08-26 14:13:10,605 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,606 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,606 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,606 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-yzmbg187
-2022-08-26 14:13:10,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,607 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33185
-2022-08-26 14:13:10,607 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33185
-2022-08-26 14:13:10,607 - distributed.worker - INFO -           Worker name:                          6
-2022-08-26 14:13:10,607 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46203
-2022-08-26 14:13:10,607 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,607 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,607 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,608 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-8fklcs2k
-2022-08-26 14:13:10,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,608 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40391
-2022-08-26 14:13:10,608 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40391
-2022-08-26 14:13:10,608 - distributed.worker - INFO -           Worker name:                          7
-2022-08-26 14:13:10,608 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44471
-2022-08-26 14:13:10,608 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,609 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,609 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,609 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-sfj8zjh5
-2022-08-26 14:13:10,609 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,609 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36077
-2022-08-26 14:13:10,609 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36077
-2022-08-26 14:13:10,609 - distributed.worker - INFO -           Worker name:                          8
-2022-08-26 14:13:10,610 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37073
-2022-08-26 14:13:10,610 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,610 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,610 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,610 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,610 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-btzi7ugv
-2022-08-26 14:13:10,610 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,610 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38139
-2022-08-26 14:13:10,610 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38139
-2022-08-26 14:13:10,611 - distributed.worker - INFO -           Worker name:                          9
-2022-08-26 14:13:10,611 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44919
-2022-08-26 14:13:10,611 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,611 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,611 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,611 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,611 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p26wd7vd
-2022-08-26 14:13:10,611 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,612 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35609
-2022-08-26 14:13:10,612 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35609
-2022-08-26 14:13:10,612 - distributed.worker - INFO -           Worker name:                         10
-2022-08-26 14:13:10,612 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40685
-2022-08-26 14:13:10,612 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,612 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,612 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,612 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-vz8idixm
-2022-08-26 14:13:10,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,624 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.2:40677', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,624 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.2:40677
-2022-08-26 14:13:10,624 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,624 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46703', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,625 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46703
-2022-08-26 14:13:10,625 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,625 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43509', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,625 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43509
-2022-08-26 14:13:10,625 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,626 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44737', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,626 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44737
-2022-08-26 14:13:10,626 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,626 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40157', name: 4, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,626 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40157
-2022-08-26 14:13:10,627 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,627 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38817', name: 5, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,627 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38817
-2022-08-26 14:13:10,627 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,627 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33185', name: 6, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,628 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33185
-2022-08-26 14:13:10,628 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,628 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40391', name: 7, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,628 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40391
-2022-08-26 14:13:10,628 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,629 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36077', name: 8, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,629 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36077
-2022-08-26 14:13:10,629 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,629 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38139', name: 9, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,630 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38139
-2022-08-26 14:13:10,630 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,630 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35609', name: 10, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,630 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35609
-2022-08-26 14:13:10,630 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,631 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,631 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,631 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,631 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,632 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,632 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,632 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,632 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,632 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,632 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,633 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,633 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,633 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,633 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,633 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,633 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,633 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,634 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,634 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,634 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,634 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36367
-2022-08-26 14:13:10,634 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,635 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,651 - distributed.scheduler - INFO - Receive client connection: Client-e385a7ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:10,651 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,712 - distributed.scheduler - INFO - Remove client Client-e385a7ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:10,712 - distributed.scheduler - INFO - Remove client Client-e385a7ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:10,712 - distributed.scheduler - INFO - Close client connection: Client-e385a7ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:10,713 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.2:40677
-2022-08-26 14:13:10,713 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46703
-2022-08-26 14:13:10,713 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43509
-2022-08-26 14:13:10,713 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44737
-2022-08-26 14:13:10,714 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40157
-2022-08-26 14:13:10,714 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38817
-2022-08-26 14:13:10,714 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33185
-2022-08-26 14:13:10,715 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40391
-2022-08-26 14:13:10,715 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36077
-2022-08-26 14:13:10,715 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38139
-2022-08-26 14:13:10,716 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35609
-2022-08-26 14:13:10,719 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.2:40677', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,719 - distributed.core - INFO - Removing comms to tcp://127.0.0.2:40677
-2022-08-26 14:13:10,719 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43509', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,719 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43509
-2022-08-26 14:13:10,719 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44737', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,719 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44737
-2022-08-26 14:13:10,720 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40157', name: 4, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,720 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40157
-2022-08-26 14:13:10,720 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38817', name: 5, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,720 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38817
-2022-08-26 14:13:10,720 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33185', name: 6, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,720 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33185
-2022-08-26 14:13:10,720 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40391', name: 7, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,720 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40391
-2022-08-26 14:13:10,720 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36077', name: 8, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,720 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36077
-2022-08-26 14:13:10,721 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38139', name: 9, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,721 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38139
-2022-08-26 14:13:10,721 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35609', name: 10, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,721 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35609
-2022-08-26 14:13:10,721 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1e0d7e50-13f5-49e1-9515-529830df6546 Address tcp://127.0.0.2:40677 Status: Status.closing
-2022-08-26 14:13:10,721 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0c19f634-0fb5-4de2-a67c-4ce4154797ba Address tcp://127.0.0.1:43509 Status: Status.closing
-2022-08-26 14:13:10,722 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b10e5f67-cf7a-407b-80af-ab82e131e5c6 Address tcp://127.0.0.1:44737 Status: Status.closing
-2022-08-26 14:13:10,722 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ba59c707-47fe-48db-ba56-58aa611b1d04 Address tcp://127.0.0.1:40157 Status: Status.closing
-2022-08-26 14:13:10,722 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-518026fc-3c99-4aaf-a8f4-2ec86bdc7a26 Address tcp://127.0.0.1:38817 Status: Status.closing
-2022-08-26 14:13:10,722 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8bd2ecd1-1a47-4e13-9975-292a6cbc024e Address tcp://127.0.0.1:33185 Status: Status.closing
-2022-08-26 14:13:10,722 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-353ec452-0bbc-445f-854b-0746cbace30b Address tcp://127.0.0.1:40391 Status: Status.closing
-2022-08-26 14:13:10,723 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f9b9551f-2202-48cd-a4d6-445437121704 Address tcp://127.0.0.1:36077 Status: Status.closing
-2022-08-26 14:13:10,723 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b66d628e-2858-41b5-9f1f-dba0ab972b18 Address tcp://127.0.0.1:38139 Status: Status.closing
-2022-08-26 14:13:10,723 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d1a636fc-d511-424d-ad5c-46956f28cbba Address tcp://127.0.0.1:35609 Status: Status.closing
-2022-08-26 14:13:10,726 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46703', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:10,726 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46703
-2022-08-26 14:13:10,726 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:10,726 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2e409328-2a3a-467e-ba64-a0c1f794e9f2 Address tcp://127.0.0.1:46703 Status: Status.closing
-2022-08-26 14:13:10,729 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:10,729 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
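The eleven-worker cluster torn down above belongs to test_gather_dep_from_remote_workers_if_all_local_workers_are_busy, which exercises the worker's gather_dep path: before running a task, a worker fetches that task's dependencies from whichever peers hold them. As a loosely related, hedged sketch (not the test's own code), a worker-to-worker dependency transfer can be forced through the public scatter/submit API by pinning data and computation to different workers of a small LocalCluster:

from distributed import Client, LocalCluster

def double(x):
    return 2 * x

if __name__ == "__main__":
    # Two single-threaded workers, so data and computation can be pinned to
    # different workers and the second one has to fetch the dependency.
    with LocalCluster(n_workers=2, threads_per_worker=1) as cluster, \
         Client(cluster) as client:
        w1, w2 = list(client.scheduler_info()["workers"])
        [data] = client.scatter([21], workers=[w1])       # payload lives on w1
        fut = client.submit(double, data, workers=[w2])   # runs on w2, gathers from w1
        print(fut.result())                               # -> 42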
-distributed/tests/test_worker.py::test_worker_client_uses_default_no_close 2022-08-26 14:13:10,967 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:10,969 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:10,969 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37039
-2022-08-26 14:13:10,969 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43247
-2022-08-26 14:13:10,972 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41847
-2022-08-26 14:13:10,972 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41847
-2022-08-26 14:13:10,972 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:10,972 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35333
-2022-08-26 14:13:10,972 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37039
-2022-08-26 14:13:10,972 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,972 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:10,972 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:10,972 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ktnjm52h
-2022-08-26 14:13:10,972 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,974 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41847', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:10,974 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41847
-2022-08-26 14:13:10,974 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,975 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37039
-2022-08-26 14:13:10,975 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:10,975 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:10,988 - distributed.scheduler - INFO - Receive client connection: Client-e3b92bb6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:10,989 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,004 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41847
-2022-08-26 14:13:11,005 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41847', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:13:11,005 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41847
-2022-08-26 14:13:11,005 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:11,005 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d56151d8-0f76-43f5-995d-0559f78ecc22 Address tcp://127.0.0.1:41847 Status: Status.closing
-2022-08-26 14:13:11,010 - distributed.scheduler - INFO - Remove client Client-e3b92bb6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,011 - distributed.scheduler - INFO - Remove client Client-e3b92bb6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,011 - distributed.scheduler - INFO - Close client connection: Client-e3b92bb6-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,011 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:11,011 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
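This test and the two that follow it exercise the worker-side client API: a task running on a worker can obtain a client of its own and submit further work, and distributed has to decide whether that client is shared with the worker (and left open) or was created on the worker (and should be closed with it). A minimal, hedged sketch of the user-facing pattern, using the documented worker_client context manager rather than anything from the tests themselves:

from distributed import Client, worker_client

def nested_inc(x):
    # Obtain a client from inside the task; worker_client() secedes from the
    # worker's thread pool while waiting, so the nested submission can run.
    with worker_client() as client:
        return client.submit(lambda v: v + 1, x).result()

if __name__ == "__main__":
    with Client(processes=False) as client:    # small in-process cluster for the sketch
        print(client.submit(nested_inc, 41).result())  # -> 42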
-distributed/tests/test_worker.py::test_worker_client_closes_if_created_on_worker_one_worker 2022-08-26 14:13:11,246 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:11,247 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:11,248 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33719
-2022-08-26 14:13:11,248 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45175
-2022-08-26 14:13:11,250 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42071
-2022-08-26 14:13:11,250 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42071
-2022-08-26 14:13:11,250 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:11,250 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33215
-2022-08-26 14:13:11,251 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33719
-2022-08-26 14:13:11,251 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,251 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:11,251 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:11,251 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3zpxgln8
-2022-08-26 14:13:11,251 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,253 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42071', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:11,253 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42071
-2022-08-26 14:13:11,253 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,253 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33719
-2022-08-26 14:13:11,253 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,253 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,267 - distributed.scheduler - INFO - Receive client connection: Client-e3e3a804-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,267 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,284 - distributed.scheduler - INFO - Receive client connection: Client-worker-e3e64c72-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,284 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,290 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42071
-2022-08-26 14:13:11,295 - distributed.scheduler - INFO - Remove client Client-worker-e3e64c72-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,296 - distributed.scheduler - INFO - Remove client Client-worker-e3e64c72-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,296 - distributed.scheduler - INFO - Close client connection: Client-worker-e3e64c72-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,297 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42071', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:11,297 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42071
-2022-08-26 14:13:11,297 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:11,297 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-560fad38-cd21-425d-b9f7-e182964d0594 Address tcp://127.0.0.1:42071 Status: Status.closing
-2022-08-26 14:13:11,301 - distributed.scheduler - INFO - Remove client Client-e3e3a804-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,301 - distributed.scheduler - INFO - Remove client Client-e3e3a804-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,301 - distributed.scheduler - INFO - Close client connection: Client-e3e3a804-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,302 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:11,302 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_client_closes_if_created_on_worker_last_worker_alive 2022-08-26 14:13:11,534 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:11,535 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:11,536 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:32863
-2022-08-26 14:13:11,536 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40929
-2022-08-26 14:13:11,540 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33453
-2022-08-26 14:13:11,540 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33453
-2022-08-26 14:13:11,540 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:11,540 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45153
-2022-08-26 14:13:11,540 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32863
-2022-08-26 14:13:11,540 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,540 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:11,540 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:11,541 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1t55kqyg
-2022-08-26 14:13:11,541 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,541 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38661
-2022-08-26 14:13:11,541 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38661
-2022-08-26 14:13:11,541 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:11,541 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38269
-2022-08-26 14:13:11,541 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:32863
-2022-08-26 14:13:11,541 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,541 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:11,541 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:11,541 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7tiyphta
-2022-08-26 14:13:11,542 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,544 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33453', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:11,545 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33453
-2022-08-26 14:13:11,545 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,545 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38661', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:11,545 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38661
-2022-08-26 14:13:11,545 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,545 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32863
-2022-08-26 14:13:11,546 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,546 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:32863
-2022-08-26 14:13:11,546 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,546 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,546 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,560 - distributed.scheduler - INFO - Receive client connection: Client-e4105d80-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,560 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,577 - distributed.scheduler - INFO - Receive client connection: Client-worker-e412ecf2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,577 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,583 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33453
-2022-08-26 14:13:11,584 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33453', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:13:11,584 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33453
-2022-08-26 14:13:11,584 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-05f92eaa-6bb7-496a-83d0-b258c056582f Address tcp://127.0.0.1:33453 Status: Status.closing
-2022-08-26 14:13:11,598 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38661
-2022-08-26 14:13:11,599 - distributed.scheduler - INFO - Remove client Client-worker-e412ecf2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,599 - distributed.scheduler - INFO - Remove client Client-worker-e412ecf2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,599 - distributed.scheduler - INFO - Close client connection: Client-worker-e412ecf2-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,600 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-897d5684-c2f6-45f9-a4af-d028b5e57c41 Address tcp://127.0.0.1:38661 Status: Status.closing
-2022-08-26 14:13:11,600 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38661', name: 1, status: closing, memory: 1, processing: 0>
-2022-08-26 14:13:11,600 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38661
-2022-08-26 14:13:11,600 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:11,604 - distributed.scheduler - INFO - Remove client Client-e4105d80-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,605 - distributed.scheduler - INFO - Remove client Client-e4105d80-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,605 - distributed.scheduler - INFO - Close client connection: Client-e4105d80-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,605 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:11,605 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_multiple_executors 2022-08-26 14:13:11,838 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:11,840 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:11,840 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40085
-2022-08-26 14:13:11,840 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40693
-2022-08-26 14:13:11,843 - distributed.scheduler - INFO - Receive client connection: Client-e43b8eda-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,843 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,846 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44079
-2022-08-26 14:13:11,846 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44079
-2022-08-26 14:13:11,846 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34293
-2022-08-26 14:13:11,846 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40085
-2022-08-26 14:13:11,846 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,846 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:11,847 - distributed.worker - INFO -                Memory:                  10.47 GiB
-2022-08-26 14:13:11,847 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zjius7hx
-2022-08-26 14:13:11,847 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,848 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44079', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:11,849 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44079
-2022-08-26 14:13:11,849 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,849 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40085
-2022-08-26 14:13:11,849 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:11,851 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:11,860 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44079
-2022-08-26 14:13:11,860 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44079', status: closing, memory: 2, processing: 0>
-2022-08-26 14:13:11,860 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44079
-2022-08-26 14:13:11,861 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:11,861 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dd1e00bb-c341-4ad8-b1a3-903f8a571b20 Address tcp://127.0.0.1:44079 Status: Status.closing
-2022-08-26 14:13:11,866 - distributed.scheduler - INFO - Remove client Client-e43b8eda-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,866 - distributed.scheduler - INFO - Remove client Client-e43b8eda-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,866 - distributed.scheduler - INFO - Close client connection: Client-e43b8eda-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:11,866 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:11,867 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_bad_executor_annotation 2022-08-26 14:13:12,098 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:12,100 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:12,100 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41855
-2022-08-26 14:13:12,100 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36217
-2022-08-26 14:13:12,105 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42805
-2022-08-26 14:13:12,105 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42805
-2022-08-26 14:13:12,105 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:12,105 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38319
-2022-08-26 14:13:12,105 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41855
-2022-08-26 14:13:12,105 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,105 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:12,105 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:12,105 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bga7rti_
-2022-08-26 14:13:12,105 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,106 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46631
-2022-08-26 14:13:12,106 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46631
-2022-08-26 14:13:12,106 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:12,106 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39427
-2022-08-26 14:13:12,106 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41855
-2022-08-26 14:13:12,106 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,106 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:12,106 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:12,106 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rqp60vi1
-2022-08-26 14:13:12,106 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,109 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42805', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:12,109 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42805
-2022-08-26 14:13:12,109 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,110 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46631', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:12,110 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46631
-2022-08-26 14:13:12,110 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,110 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41855
-2022-08-26 14:13:12,110 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,111 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41855
-2022-08-26 14:13:12,111 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,111 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,111 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,124 - distributed.scheduler - INFO - Receive client connection: Client-e4668b30-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:12,125 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,136 - distributed.worker - ERROR - Exception during execution of task inc-03d935909bba38f9a49655e867cbf56a.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2181, in execute
-    e = self.executors[executor]
-KeyError: 'bad'
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2183, in execute
-    raise ValueError(
-ValueError: Invalid executor 'bad'; expected one of: ['actor', 'default', 'offload']
-2022-08-26 14:13:12,147 - distributed.scheduler - INFO - Remove client Client-e4668b30-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:12,147 - distributed.scheduler - INFO - Remove client Client-e4668b30-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:12,147 - distributed.scheduler - INFO - Close client connection: Client-e4668b30-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:12,148 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42805
-2022-08-26 14:13:12,148 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46631
-2022-08-26 14:13:12,149 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42805', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:12,149 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42805
-2022-08-26 14:13:12,149 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46631', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:12,149 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46631
-2022-08-26 14:13:12,149 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:12,149 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dc84967c-0def-49c9-828d-7ee004fa996f Address tcp://127.0.0.1:42805 Status: Status.closing
-2022-08-26 14:13:12,150 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b07937ec-d575-48f5-b4f4-7ff2ecd3de81 Address tcp://127.0.0.1:46631 Status: Status.closing
-2022-08-26 14:13:12,150 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:12,151 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
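The traceback above is the interesting part of test_bad_executor_annotation: each worker keeps a dict of named executors ('actor', 'default' and 'offload' out of the box) and looks a task's executor annotation up in that dict before running it, so an unknown name fails with the ValueError shown. A hedged sketch of the user-facing side, assuming that executor annotations set with dask.annotate propagate through Client.submit in this release (which is what the logged inc-... task key implies); this is not the test's own code:

import dask
from distributed import Client

def inc(x):
    return x + 1

if __name__ == "__main__":
    with Client(processes=False) as client:
        # Route the task to one of the worker's built-in executors.
        with dask.annotate(executor="offload"):
            ok = client.submit(inc, 1)
        print(ok.result())  # -> 2

        # An unknown executor name fails on the worker with the ValueError above.
        with dask.annotate(executor="bad"):
            broken = client.submit(inc, 2)
        try:
            broken.result()
        except ValueError as err:
            print(err)      # Invalid executor 'bad'; expected one of: [...]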
-distributed/tests/test_worker.py::test_process_executor 2022-08-26 14:13:12,382 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:12,384 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:12,384 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39341
-2022-08-26 14:13:12,384 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33709
-2022-08-26 14:13:12,389 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37421
-2022-08-26 14:13:12,389 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37421
-2022-08-26 14:13:12,389 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:12,389 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45817
-2022-08-26 14:13:12,389 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39341
-2022-08-26 14:13:12,389 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,389 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:12,389 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:12,389 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-mkv0tz0r
-2022-08-26 14:13:12,389 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,390 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43523
-2022-08-26 14:13:12,390 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43523
-2022-08-26 14:13:12,390 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:12,390 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36685
-2022-08-26 14:13:12,390 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39341
-2022-08-26 14:13:12,390 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,390 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:12,390 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:12,390 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cz1_489s
-2022-08-26 14:13:12,390 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,393 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37421', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:12,393 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37421
-2022-08-26 14:13:12,393 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,394 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43523', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:12,394 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43523
-2022-08-26 14:13:12,394 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,394 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39341
-2022-08-26 14:13:12,394 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,395 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39341
-2022-08-26 14:13:12,395 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,395 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,395 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,409 - distributed.scheduler - INFO - Receive client connection: Client-e491e68b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:12,409 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,639 - distributed.scheduler - INFO - Remove client Client-e491e68b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:12,640 - distributed.scheduler - INFO - Remove client Client-e491e68b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:12,640 - distributed.scheduler - INFO - Close client connection: Client-e491e68b-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:12,641 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37421
-2022-08-26 14:13:12,641 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43523
-2022-08-26 14:13:12,643 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37421', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:12,643 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37421
-2022-08-26 14:13:12,643 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43523', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:12,643 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43523
-2022-08-26 14:13:12,643 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:12,643 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c6823604-8bc1-42a6-bbf6-9e3287265924 Address tcp://127.0.0.1:37421 Status: Status.closing
-2022-08-26 14:13:12,644 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f943a92e-0101-4d0d-aac3-4eb97818b67c Address tcp://127.0.0.1:43523 Status: Status.closing
-2022-08-26 14:13:12,645 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:12,645 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
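test_process_executor and test_process_executor_kills_process (whose BrokenProcessPool tracebacks appear below) run tasks on a ProcessPoolExecutor attached to the worker. A small, hedged sketch of that wiring, assuming the documented executors= parameter of Worker and, as above, that dask.annotate(executor=...) routes the task; the name "processes" is arbitrary and none of this is the test's own code:

import asyncio
from concurrent.futures import ProcessPoolExecutor
from operator import mul   # importable by the pool's child processes

import dask
from distributed import Client, Scheduler, Worker

async def main():
    async with Scheduler(dashboard_address=":0") as s:
        # Give the worker an extra, named executor backed by OS processes.
        async with Worker(s.address, nthreads=1,
                          executors={"processes": ProcessPoolExecutor(2)}):
            async with Client(s.address, asynchronous=True) as c:
                with dask.annotate(executor="processes"):
                    fut = c.submit(mul, 2, 21)
                print(await fut)   # -> 42

if __name__ == "__main__":
    asyncio.run(main())

Killing one of the pool's child processes while such a task is running is what produces the BrokenProcessPool errors logged by the next test.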
-distributed/tests/test_worker.py::test_process_executor_kills_process 2022-08-26 14:13:12,896 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:12,898 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:12,898 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39869
-2022-08-26 14:13:12,898 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38971
-2022-08-26 14:13:12,901 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43059
-2022-08-26 14:13:12,901 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43059
-2022-08-26 14:13:12,901 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:12,901 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34313
-2022-08-26 14:13:12,901 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39869
-2022-08-26 14:13:12,901 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,901 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:12,901 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:12,901 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4j4x6t9u
-2022-08-26 14:13:12,901 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,903 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43059', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:12,904 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43059
-2022-08-26 14:13:12,904 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,904 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39869
-2022-08-26 14:13:12,904 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:12,904 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:12,917 - distributed.scheduler - INFO - Receive client connection: Client-e4df8ab7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:12,918 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:13,093 - distributed.worker - ERROR - Exception during execution of task kill_process-f581f22c19cae1e44bd089fbfeace88b.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2212, in execute
-    result = await self.loop.run_in_executor(
-concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
-2022-08-26 14:13:13,099 - distributed.worker - ERROR - Exception during execution of task kill_process-f581f22c19cae1e44bd089fbfeace88b.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2212, in execute
-    result = await self.loop.run_in_executor(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 277, in run_in_executor
-    return self.asyncio_loop.run_in_executor(executor, func, *args)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/base_events.py", line 818, in run_in_executor
-    executor.submit(func, *args), loop=self)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/concurrent/futures/process.py", line 715, in submit
-    raise BrokenProcessPool(self._broken)
-concurrent.futures.process.BrokenProcessPool: A child process terminated abruptly, the process pool is not usable anymore
-2022-08-26 14:13:13,108 - distributed.worker - ERROR - Exception during execution of task inc-03d935909bba38f9a49655e867cbf56a.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2212, in execute
-    result = await self.loop.run_in_executor(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 277, in run_in_executor
-    return self.asyncio_loop.run_in_executor(executor, func, *args)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/base_events.py", line 818, in run_in_executor
-    executor.submit(func, *args), loop=self)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/concurrent/futures/process.py", line 715, in submit
-    raise BrokenProcessPool(self._broken)
-concurrent.futures.process.BrokenProcessPool: A child process terminated abruptly, the process pool is not usable anymore
-2022-08-26 14:13:13,114 - distributed.worker - ERROR - Exception during execution of task inc-03d935909bba38f9a49655e867cbf56a.
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2212, in execute
-    result = await self.loop.run_in_executor(
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 277, in run_in_executor
-    return self.asyncio_loop.run_in_executor(executor, func, *args)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/base_events.py", line 818, in run_in_executor
-    executor.submit(func, *args), loop=self)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/concurrent/futures/process.py", line 715, in submit
-    raise BrokenProcessPool(self._broken)
-concurrent.futures.process.BrokenProcessPool: A child process terminated abruptly, the process pool is not usable anymore
-2022-08-26 14:13:13,131 - distributed.scheduler - INFO - Remove client Client-e4df8ab7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:13,131 - distributed.scheduler - INFO - Remove client Client-e4df8ab7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:13,132 - distributed.scheduler - INFO - Close client connection: Client-e4df8ab7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:13,133 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43059
-2022-08-26 14:13:13,134 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43059', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:13,134 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43059
-2022-08-26 14:13:13,134 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:13,134 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a7897e54-4e77-49c8-9e8c-4376f42c8a06 Address tcp://127.0.0.1:43059 Status: Status.closing
-2022-08-26 14:13:13,135 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:13,135 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_process_executor_raise_exception 2022-08-26 14:13:13,379 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:13,380 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:13,381 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39569
-2022-08-26 14:13:13,381 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:39807
-2022-08-26 14:13:13,385 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33929
-2022-08-26 14:13:13,385 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33929
-2022-08-26 14:13:13,385 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:13,385 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36983
-2022-08-26 14:13:13,385 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39569
-2022-08-26 14:13:13,385 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:13,385 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:13,386 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:13,386 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-s3cno62p
-2022-08-26 14:13:13,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:13,386 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34147
-2022-08-26 14:13:13,386 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34147
-2022-08-26 14:13:13,386 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:13,386 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42083
-2022-08-26 14:13:13,386 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39569
-2022-08-26 14:13:13,386 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:13,386 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:13,386 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:13,387 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ok1w3qol
-2022-08-26 14:13:13,387 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:13,390 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33929', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:13,390 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33929
-2022-08-26 14:13:13,390 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:13,390 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34147', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:13,391 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34147
-2022-08-26 14:13:13,391 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:13,391 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39569
-2022-08-26 14:13:13,391 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:13,391 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39569
-2022-08-26 14:13:13,391 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:13,392 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:13,392 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:13,405 - distributed.scheduler - INFO - Receive client connection: Client-e529f53a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:13,406 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:13,562 - distributed.worker - WARNING - Compute Failed
-Key:       raise_exc-32b173d8b530a5b6a3cce58585f5784e
-Function:  raise_exc
-args:      ()
-kwargs:    {}
-Exception: "RuntimeError('foo')"
-
-2022-08-26 14:13:13,569 - distributed.worker - WARNING - Compute Failed
-Key:       raise_exc-32b173d8b530a5b6a3cce58585f5784e
-Function:  raise_exc
-args:      ()
-kwargs:    {}
-Exception: "RuntimeError('foo')"
-
-2022-08-26 14:13:13,618 - distributed.scheduler - INFO - Remove client Client-e529f53a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:13,618 - distributed.scheduler - INFO - Remove client Client-e529f53a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:13,619 - distributed.scheduler - INFO - Close client connection: Client-e529f53a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:13,619 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33929
-2022-08-26 14:13:13,620 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34147
-2022-08-26 14:13:13,621 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33929', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:13,621 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33929
-2022-08-26 14:13:13,621 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34147', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:13,621 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34147
-2022-08-26 14:13:13,621 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:13,621 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e30a671b-f5a4-4d84-9655-e15840d8764e Address tcp://127.0.0.1:33929 Status: Status.closing
-2022-08-26 14:13:13,622 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3e6feb91-a537-4ca8-9713-7a0fc419b9bb Address tcp://127.0.0.1:34147 Status: Status.closing
-2022-08-26 14:13:13,623 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:13,623 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gpu_executor 2022-08-26 14:13:13,869 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:13,871 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:13,871 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38663
-2022-08-26 14:13:13,871 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41731
-2022-08-26 14:13:13,874 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39889
-2022-08-26 14:13:13,874 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39889
-2022-08-26 14:13:13,874 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:13,874 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39217
-2022-08-26 14:13:13,874 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38663
-2022-08-26 14:13:13,874 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:13,874 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:13,874 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:13,874 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tbjnkyb0
-2022-08-26 14:13:13,874 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:13,876 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39889', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:13,877 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39889
-2022-08-26 14:13:13,877 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:13,877 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38663
-2022-08-26 14:13:13,877 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:13,877 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:13,891 - distributed.scheduler - INFO - Receive client connection: Client-e5740359-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:13,891 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:13,902 - distributed.scheduler - INFO - Remove client Client-e5740359-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:13,902 - distributed.scheduler - INFO - Remove client Client-e5740359-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:13,903 - distributed.scheduler - INFO - Close client connection: Client-e5740359-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:13,903 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39889
-2022-08-26 14:13:13,904 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39889', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:13,904 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39889
-2022-08-26 14:13:13,904 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:13,904 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7e7debb3-a882-4e43-9d05-7db10ae1de5c Address tcp://127.0.0.1:39889 Status: Status.closing
-2022-08-26 14:13:13,905 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:13,905 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_state_error_release_error_last 2022-08-26 14:13:14,139 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:14,141 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:14,141 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42735
-2022-08-26 14:13:14,141 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44219
-2022-08-26 14:13:14,145 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38023
-2022-08-26 14:13:14,145 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38023
-2022-08-26 14:13:14,146 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:14,146 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40317
-2022-08-26 14:13:14,146 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42735
-2022-08-26 14:13:14,146 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,146 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:14,146 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:14,146 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xblxwd49
-2022-08-26 14:13:14,146 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,146 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33895
-2022-08-26 14:13:14,146 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33895
-2022-08-26 14:13:14,147 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:14,147 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36969
-2022-08-26 14:13:14,147 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42735
-2022-08-26 14:13:14,147 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,147 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:14,147 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:14,147 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-l8j2rq_z
-2022-08-26 14:13:14,147 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,150 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38023', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:14,150 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38023
-2022-08-26 14:13:14,150 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,150 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33895', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:14,151 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33895
-2022-08-26 14:13:14,151 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,151 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42735
-2022-08-26 14:13:14,151 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,151 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42735
-2022-08-26 14:13:14,151 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,152 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,152 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,165 - distributed.scheduler - INFO - Receive client connection: Client-e59df07e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,166 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,187 - distributed.worker - WARNING - Compute Failed
-Key:       raise_exc-be1e6d36af8e4ee238a38a67d8f37f33
-Function:  raise_exc
-args:      (2, 2)
-kwargs:    {}
-Exception: 'RuntimeError()'
-
-2022-08-26 14:13:14,212 - distributed.scheduler - INFO - Remove client Client-e59df07e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,212 - distributed.scheduler - INFO - Remove client Client-e59df07e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,212 - distributed.scheduler - INFO - Close client connection: Client-e59df07e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,213 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38023
-2022-08-26 14:13:14,213 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33895
-2022-08-26 14:13:14,214 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38023', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:14,214 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38023
-2022-08-26 14:13:14,214 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33895', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:14,214 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33895
-2022-08-26 14:13:14,214 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:14,214 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-864bc943-176a-4ca5-82b5-c3e20af63402 Address tcp://127.0.0.1:38023 Status: Status.closing
-2022-08-26 14:13:14,215 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d6906543-bcf5-43f9-9736-3e8227243842 Address tcp://127.0.0.1:33895 Status: Status.closing
-2022-08-26 14:13:14,216 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:14,216 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_state_error_release_error_first 2022-08-26 14:13:14,451 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:14,452 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:14,453 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34781
-2022-08-26 14:13:14,453 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44035
-2022-08-26 14:13:14,457 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34051
-2022-08-26 14:13:14,457 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34051
-2022-08-26 14:13:14,457 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:14,457 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39527
-2022-08-26 14:13:14,457 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34781
-2022-08-26 14:13:14,457 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,458 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:14,458 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:14,458 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uwld21xd
-2022-08-26 14:13:14,458 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,458 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43511
-2022-08-26 14:13:14,458 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43511
-2022-08-26 14:13:14,458 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:14,458 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38475
-2022-08-26 14:13:14,458 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34781
-2022-08-26 14:13:14,458 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,459 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:14,459 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:14,459 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jvcphwdz
-2022-08-26 14:13:14,459 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,461 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34051', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:14,462 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34051
-2022-08-26 14:13:14,462 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,462 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43511', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:14,462 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43511
-2022-08-26 14:13:14,463 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,463 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34781
-2022-08-26 14:13:14,463 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,463 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34781
-2022-08-26 14:13:14,463 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,463 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,463 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,477 - distributed.scheduler - INFO - Receive client connection: Client-e5cd84c4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,477 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,499 - distributed.worker - WARNING - Compute Failed
-Key:       raise_exc-dd834e1035a2ab0878c8d70842279e70
-Function:  raise_exc
-args:      (2, 2)
-kwargs:    {}
-Exception: 'RuntimeError()'
-
-2022-08-26 14:13:14,524 - distributed.scheduler - INFO - Remove client Client-e5cd84c4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,524 - distributed.scheduler - INFO - Remove client Client-e5cd84c4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,524 - distributed.scheduler - INFO - Close client connection: Client-e5cd84c4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,525 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34051
-2022-08-26 14:13:14,525 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43511
-2022-08-26 14:13:14,526 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34051', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:14,526 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34051
-2022-08-26 14:13:14,526 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43511', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:14,526 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43511
-2022-08-26 14:13:14,526 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:14,526 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6e1968fa-2615-47bb-8672-d0f6954340ff Address tcp://127.0.0.1:34051 Status: Status.closing
-2022-08-26 14:13:14,527 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a8f771dc-94e7-4922-8dc9-54b67f34a996 Address tcp://127.0.0.1:43511 Status: Status.closing
-2022-08-26 14:13:14,528 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:14,528 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_state_error_release_error_int 2022-08-26 14:13:14,763 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:14,765 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:14,765 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41387
-2022-08-26 14:13:14,765 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33713
-2022-08-26 14:13:14,770 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40523
-2022-08-26 14:13:14,770 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40523
-2022-08-26 14:13:14,770 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:14,770 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45199
-2022-08-26 14:13:14,770 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41387
-2022-08-26 14:13:14,770 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,770 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:14,770 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:14,770 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0i3cptac
-2022-08-26 14:13:14,770 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,771 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43079
-2022-08-26 14:13:14,771 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43079
-2022-08-26 14:13:14,771 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:14,771 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35435
-2022-08-26 14:13:14,771 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41387
-2022-08-26 14:13:14,771 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,771 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:14,771 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:14,771 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-if136ko6
-2022-08-26 14:13:14,771 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,774 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40523', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:14,775 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40523
-2022-08-26 14:13:14,775 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,775 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43079', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:14,775 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43079
-2022-08-26 14:13:14,775 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,776 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41387
-2022-08-26 14:13:14,776 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,776 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41387
-2022-08-26 14:13:14,776 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:14,776 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,776 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,790 - distributed.scheduler - INFO - Receive client connection: Client-e5fd4127-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,790 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:14,812 - distributed.worker - WARNING - Compute Failed
-Key:       raise_exc-dd038f1fe765bb2e0c881046eff950e2
-Function:  raise_exc
-args:      (2, 2)
-kwargs:    {}
-Exception: 'RuntimeError()'
-
-2022-08-26 14:13:14,836 - distributed.scheduler - INFO - Remove client Client-e5fd4127-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,836 - distributed.scheduler - INFO - Remove client Client-e5fd4127-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,836 - distributed.scheduler - INFO - Close client connection: Client-e5fd4127-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:14,837 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40523
-2022-08-26 14:13:14,837 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43079
-2022-08-26 14:13:14,838 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40523', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:14,838 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40523
-2022-08-26 14:13:14,838 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43079', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:14,838 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43079
-2022-08-26 14:13:14,838 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:14,838 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1656dc2b-8c4d-40b4-9efa-f67a75f8b7ea Address tcp://127.0.0.1:40523 Status: Status.closing
-2022-08-26 14:13:14,839 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2be36805-54a1-4a76-bcca-f54db5a73926 Address tcp://127.0.0.1:43079 Status: Status.closing
-2022-08-26 14:13:14,840 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:14,840 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_state_error_long_chain 2022-08-26 14:13:15,077 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:15,079 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:15,079 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43915
-2022-08-26 14:13:15,079 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34529
-2022-08-26 14:13:15,084 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38537
-2022-08-26 14:13:15,084 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38537
-2022-08-26 14:13:15,084 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:15,084 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44813
-2022-08-26 14:13:15,084 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43915
-2022-08-26 14:13:15,084 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,084 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:15,084 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:15,084 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f00hv4xp
-2022-08-26 14:13:15,084 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,085 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46225
-2022-08-26 14:13:15,085 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46225
-2022-08-26 14:13:15,085 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:15,085 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44617
-2022-08-26 14:13:15,085 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43915
-2022-08-26 14:13:15,085 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,085 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:15,085 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:15,085 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rmqm3aa1
-2022-08-26 14:13:15,085 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,088 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38537', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:15,088 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38537
-2022-08-26 14:13:15,088 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,089 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46225', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:15,089 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46225
-2022-08-26 14:13:15,089 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,089 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43915
-2022-08-26 14:13:15,089 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,090 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43915
-2022-08-26 14:13:15,090 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,090 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,090 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,104 - distributed.scheduler - INFO - Receive client connection: Client-e62d216a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:15,104 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,131 - distributed.worker - WARNING - Compute Failed
-Key:       res
-Function:  raise_exc
-args:      (2, 3)
-kwargs:    {}
-Exception: 'RuntimeError()'
-
-2022-08-26 14:13:15,446 - distributed.scheduler - INFO - Remove client Client-e62d216a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:15,447 - distributed.scheduler - INFO - Remove client Client-e62d216a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:15,447 - distributed.scheduler - INFO - Close client connection: Client-e62d216a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:15,447 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38537
-2022-08-26 14:13:15,448 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46225
-2022-08-26 14:13:15,449 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38537', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:15,449 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38537
-2022-08-26 14:13:15,449 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46225', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:15,449 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46225
-2022-08-26 14:13:15,449 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:15,449 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-433dfb3e-e9b1-4ae4-a250-f0ff91c56f46 Address tcp://127.0.0.1:38537 Status: Status.closing
-2022-08-26 14:13:15,450 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f9c08f58-f513-47a4-9bad-db1dc5d40471 Address tcp://127.0.0.1:46225 Status: Status.closing
-2022-08-26 14:13:15,451 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:15,451 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_hold_on_to_replicas 2022-08-26 14:13:15,686 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:15,687 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:15,687 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:36969
-2022-08-26 14:13:15,687 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37549
-2022-08-26 14:13:15,696 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37555
-2022-08-26 14:13:15,696 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37555
-2022-08-26 14:13:15,696 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:15,696 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43527
-2022-08-26 14:13:15,696 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36969
-2022-08-26 14:13:15,696 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,696 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:15,696 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:15,696 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rl233m2o
-2022-08-26 14:13:15,696 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,697 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39305
-2022-08-26 14:13:15,697 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39305
-2022-08-26 14:13:15,697 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:15,697 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36267
-2022-08-26 14:13:15,697 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36969
-2022-08-26 14:13:15,697 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,697 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:15,697 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:15,697 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-oggbc22d
-2022-08-26 14:13:15,697 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,698 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43975
-2022-08-26 14:13:15,698 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43975
-2022-08-26 14:13:15,698 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:13:15,698 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34829
-2022-08-26 14:13:15,698 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36969
-2022-08-26 14:13:15,698 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,698 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:13:15,698 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:15,699 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j7ly4r5p
-2022-08-26 14:13:15,699 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,699 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40073
-2022-08-26 14:13:15,699 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40073
-2022-08-26 14:13:15,699 - distributed.worker - INFO -           Worker name:                          3
-2022-08-26 14:13:15,699 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43913
-2022-08-26 14:13:15,699 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36969
-2022-08-26 14:13:15,699 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,700 - distributed.worker - INFO -               Threads:                          4
-2022-08-26 14:13:15,700 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:15,700 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-n1prj4cz
-2022-08-26 14:13:15,700 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,704 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37555', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:15,705 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37555
-2022-08-26 14:13:15,705 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,705 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39305', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:15,705 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39305
-2022-08-26 14:13:15,706 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,706 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43975', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:15,706 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43975
-2022-08-26 14:13:15,706 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,707 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40073', name: 3, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:15,707 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40073
-2022-08-26 14:13:15,707 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,707 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36969
-2022-08-26 14:13:15,707 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,708 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36969
-2022-08-26 14:13:15,708 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,708 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36969
-2022-08-26 14:13:15,708 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,708 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36969
-2022-08-26 14:13:15,708 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:15,708 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,709 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,709 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,709 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:15,723 - distributed.scheduler - INFO - Receive client connection: Client-e68b9929-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:15,723 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:16,066 - distributed.scheduler - INFO - Remove client Client-e68b9929-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:16,066 - distributed.scheduler - INFO - Remove client Client-e68b9929-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:16,067 - distributed.scheduler - INFO - Close client connection: Client-e68b9929-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:16,067 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37555
-2022-08-26 14:13:16,067 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39305
-2022-08-26 14:13:16,068 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43975
-2022-08-26 14:13:16,068 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40073
-2022-08-26 14:13:16,069 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37555', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:16,069 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37555
-2022-08-26 14:13:16,070 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39305', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:16,070 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39305
-2022-08-26 14:13:16,070 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43975', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:16,070 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43975
-2022-08-26 14:13:16,070 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40073', name: 3, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:16,070 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40073
-2022-08-26 14:13:16,070 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:16,070 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-75568cb3-e3fe-4ebe-9da9-ecab6fc33889 Address tcp://127.0.0.1:37555 Status: Status.closing
-2022-08-26 14:13:16,071 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-36ce7e2e-ee6e-4e83-848a-c4bf48a11658 Address tcp://127.0.0.1:39305 Status: Status.closing
-2022-08-26 14:13:16,071 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d417735d-3220-43bb-b42e-c30c7ef46788 Address tcp://127.0.0.1:43975 Status: Status.closing
-2022-08-26 14:13:16,071 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0bf4e4a0-2c38-4b09-bc1f-6b8956e497e3 Address tcp://127.0.0.1:40073 Status: Status.closing
-2022-08-26 14:13:16,074 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:16,074 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_forget_dependents_after_release 2022-08-26 14:13:16,309 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:16,310 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:16,311 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35759
-2022-08-26 14:13:16,311 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34757
-2022-08-26 14:13:16,313 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40175
-2022-08-26 14:13:16,313 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40175
-2022-08-26 14:13:16,314 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:16,314 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41617
-2022-08-26 14:13:16,314 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35759
-2022-08-26 14:13:16,314 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:16,314 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:16,314 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:16,314 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-nj48ley8
-2022-08-26 14:13:16,314 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:16,316 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40175', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:16,316 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40175
-2022-08-26 14:13:16,316 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:16,317 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35759
-2022-08-26 14:13:16,317 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:16,317 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:16,330 - distributed.scheduler - INFO - Receive client connection: Client-e6e846da-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:16,330 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:16,362 - distributed.scheduler - INFO - Remove client Client-e6e846da-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:16,363 - distributed.scheduler - INFO - Remove client Client-e6e846da-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:16,363 - distributed.scheduler - INFO - Close client connection: Client-e6e846da-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:16,364 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40175
-2022-08-26 14:13:16,364 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7bb88d95-120e-4002-8788-73c24546b64b Address tcp://127.0.0.1:40175 Status: Status.closing
-2022-08-26 14:13:16,365 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40175', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:16,365 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40175
-2022-08-26 14:13:16,365 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:16,365 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:16,366 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_steal_during_task_deserialization 2022-08-26 14:13:16,600 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:16,601 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:16,601 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41203
-2022-08-26 14:13:16,601 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43077
-2022-08-26 14:13:16,606 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36317
-2022-08-26 14:13:16,606 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36317
-2022-08-26 14:13:16,606 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:16,606 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45685
-2022-08-26 14:13:16,606 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41203
-2022-08-26 14:13:16,606 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:16,606 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:16,606 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:16,607 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-95wut84v
-2022-08-26 14:13:16,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:16,607 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40197
-2022-08-26 14:13:16,607 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40197
-2022-08-26 14:13:16,607 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:16,607 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39451
-2022-08-26 14:13:16,607 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41203
-2022-08-26 14:13:16,607 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:16,607 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:16,608 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:16,608 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-076sc620
-2022-08-26 14:13:16,608 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:16,611 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36317', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:16,611 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36317
-2022-08-26 14:13:16,611 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:16,611 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40197', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:16,612 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40197
-2022-08-26 14:13:16,612 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:16,612 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41203
-2022-08-26 14:13:16,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:16,612 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41203
-2022-08-26 14:13:16,612 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:16,613 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:16,613 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:16,626 - distributed.scheduler - INFO - Receive client connection: Client-e715721a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:16,626 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:16,759 - distributed.scheduler - INFO - Remove client Client-e715721a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:16,759 - distributed.scheduler - INFO - Remove client Client-e715721a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:16,760 - distributed.scheduler - INFO - Close client connection: Client-e715721a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:16,760 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36317
-2022-08-26 14:13:16,761 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40197
-2022-08-26 14:13:16,761 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36317', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:16,762 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36317
-2022-08-26 14:13:16,762 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40197', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:16,762 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40197
-2022-08-26 14:13:16,762 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:16,762 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-5ec442ff-8957-416e-8fd9-ecf1b9ef0f62 Address tcp://127.0.0.1:36317 Status: Status.closing
-2022-08-26 14:13:16,762 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-df41bf52-ac1a-48db-a52b-db9ae78cc238 Address tcp://127.0.0.1:40197 Status: Status.closing
-2022-08-26 14:13:16,764 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:16,764 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_run_spec_deserialize_fail 2022-08-26 14:13:16,998 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:16,999 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:16,999 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37193
-2022-08-26 14:13:17,000 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35431
-2022-08-26 14:13:17,004 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39527
-2022-08-26 14:13:17,004 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39527
-2022-08-26 14:13:17,004 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:17,004 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39805
-2022-08-26 14:13:17,004 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37193
-2022-08-26 14:13:17,004 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,004 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:17,004 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:17,005 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-n1393stx
-2022-08-26 14:13:17,005 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,005 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33579
-2022-08-26 14:13:17,005 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33579
-2022-08-26 14:13:17,005 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:17,005 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41991
-2022-08-26 14:13:17,005 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37193
-2022-08-26 14:13:17,005 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,005 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:17,006 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:17,006 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-he1eqopa
-2022-08-26 14:13:17,006 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,009 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39527', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:17,009 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39527
-2022-08-26 14:13:17,009 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,009 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33579', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:17,010 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33579
-2022-08-26 14:13:17,010 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,010 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37193
-2022-08-26 14:13:17,010 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,010 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37193
-2022-08-26 14:13:17,010 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,010 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,010 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,024 - distributed.scheduler - INFO - Receive client connection: Client-e75227ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,024 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,036 - distributed.protocol.pickle - INFO - Failed to deserialize b'\x80\x04\x95\xd0\x02\x00\x00\x00\x00\x00\x00\x8c\x17cloudpickle.cloudpickle\x94\x8c\x0e_make_function\x94\x93\x94(h\x00\x8c\r_builtin_type\x94\x93\x94\x8c\x08CodeType\x94\x85\x94R\x94(K\x00K\x00K\x00K\x00K\x02JS\x00\x00\x01C\x08d\x01d\x02\x1b\x00S\x00\x94NK\x01K\x00\x87\x94))\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py\x94\x8c\x08<lambda>\x94Mi\nC\x02\x08\x00\x94))t\x94R\x94}\x94(\x8c\x0b__package__\x94\x8c\x00\x94\x8c\x08__name__\x94\x8c\x0btest_worker\x94\x8c\x08__file__\x94\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py\x94uNNNt\x94R\x94\x8c\x1ccloudpickle.cloudpickle_fast\x94\x8c\x12_function_setstate\x94\x93\x94h\x17}\x94}\x94(h\x12h\x0b\x8c\x0c__qualname__\x94\x8cFtest_run_spec_deserialize_fail.<locals>.F.__reduce__.<locals>.<lambda>\x94\x8c\x0f__annotations__\x94}\x94\x8c\x0e__kwdefaults__\x94N\x8c\x0c__defaults__\x94N\x8c\n__module__\x94h\x13\x8c\x07__doc__\x94N\x8c\x0b__closure__\x94N\x8c\x17_cloudpickle_submodules\x94]\x94\x8c\x0b__globals__\x94}\x94u\x86\x94\x86R0)R\x94.'
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2744, in loads_function
-    result = cache_loads[bytes_object]
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/collections.py", line 23, in __getitem__
-    value = super().__getitem__(key)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/collections/__init__.py", line 1106, in __getitem__
-    raise KeyError(key)
-KeyError: b'\x80\x04\x95\xd0\x02\x00\x00\x00\x00\x00\x00\x8c\x17cloudpickle.cloudpickle\x94\x8c\x0e_make_function\x94\x93\x94(h\x00\x8c\r_builtin_type\x94\x93\x94\x8c\x08CodeType\x94\x85\x94R\x94(K\x00K\x00K\x00K\x00K\x02JS\x00\x00\x01C\x08d\x01d\x02\x1b\x00S\x00\x94NK\x01K\x00\x87\x94))\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py\x94\x8c\x08<lambda>\x94Mi\nC\x02\x08\x00\x94))t\x94R\x94}\x94(\x8c\x0b__package__\x94\x8c\x00\x94\x8c\x08__name__\x94\x8c\x0btest_worker\x94\x8c\x08__file__\x94\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py\x94uNNNt\x94R\x94\x8c\x1ccloudpickle.cloudpickle_fast\x94\x8c\x12_function_setstate\x94\x93\x94h\x17}\x94}\x94(h\x12h\x0b\x8c\x0c__qualname__\x94\x8cFtest_run_spec_deserialize_fail.<locals>.F.__reduce__.<locals>.<lambda>\x94\x8c\x0f__annotations__\x94}\x94\x8c\x0e__kwdefaults__\x94N\x8c\x0c__defaults__\x94N\x8c\n__module__\x94h\x13\x8c\x07__doc__\x94N\x8c\x0b__closure__\x94N\x8c\x17_cloudpickle_submodules\x94]\x94\x8c\x0b__globals__\x94}\x94u\x86\x94\x86R0)R\x94.'
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/pickle.py", line 73, in loads
-    return pickle.loads(x)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py", line 2665, in <lambda>
-    return lambda: 1 / 0, ()
-ZeroDivisionError: division by zero
-2022-08-26 14:13:17,037 - distributed.worker - ERROR - Could not deserialize task <test_worker.test_run_spec_deserialize_fail.<local-b29c35cd33818f315bb16d276cb7c3d5
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2744, in loads_function
-    result = cache_loads[bytes_object]
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/collections.py", line 23, in __getitem__
-    value = super().__getitem__(key)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/collections/__init__.py", line 1106, in __getitem__
-    raise KeyError(key)
-KeyError: b'\x80\x04\x95\xd0\x02\x00\x00\x00\x00\x00\x00\x8c\x17cloudpickle.cloudpickle\x94\x8c\x0e_make_function\x94\x93\x94(h\x00\x8c\r_builtin_type\x94\x93\x94\x8c\x08CodeType\x94\x85\x94R\x94(K\x00K\x00K\x00K\x00K\x02JS\x00\x00\x01C\x08d\x01d\x02\x1b\x00S\x00\x94NK\x01K\x00\x87\x94))\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py\x94\x8c\x08<lambda>\x94Mi\nC\x02\x08\x00\x94))t\x94R\x94}\x94(\x8c\x0b__package__\x94\x8c\x00\x94\x8c\x08__name__\x94\x8c\x0btest_worker\x94\x8c\x08__file__\x94\x8cg/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py\x94uNNNt\x94R\x94\x8c\x1ccloudpickle.cloudpickle_fast\x94\x8c\x12_function_setstate\x94\x93\x94h\x17}\x94}\x94(h\x12h\x0b\x8c\x0c__qualname__\x94\x8cFtest_run_spec_deserialize_fail.<locals>.F.__reduce__.<locals>.<lambda>\x94\x8c\x0f__annotations__\x94}\x94\x8c\x0e__kwdefaults__\x94N\x8c\x0c__defaults__\x94N\x8c\n__module__\x94h\x13\x8c\x07__doc__\x94N\x8c\x0b__closure__\x94N\x8c\x17_cloudpickle_submodules\x94]\x94\x8c\x0b__globals__\x94}\x94u\x86\x94\x86R0)R\x94.'
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2161, in execute
-    function, args, kwargs = await self._maybe_deserialize_task(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2134, in _maybe_deserialize_task
-    function, args, kwargs = _deserialize(*ts.run_spec)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2755, in _deserialize
-    function = loads_function(function)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2746, in loads_function
-    result = pickle.loads(bytes_object)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/pickle.py", line 73, in loads
-    return pickle.loads(x)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker.py", line 2665, in <lambda>
-    return lambda: 1 / 0, ()
-ZeroDivisionError: division by zero
-2022-08-26 14:13:17,047 - distributed.scheduler - INFO - Remove client Client-e75227ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,047 - distributed.scheduler - INFO - Remove client Client-e75227ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,047 - distributed.scheduler - INFO - Close client connection: Client-e75227ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,048 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39527
-2022-08-26 14:13:17,048 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33579
-2022-08-26 14:13:17,049 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39527', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:17,049 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39527
-2022-08-26 14:13:17,049 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33579', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:17,049 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33579
-2022-08-26 14:13:17,049 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:17,050 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-bc4b02bc-b448-42b5-9d47-a6342a83102b Address tcp://127.0.0.1:39527 Status: Status.closing
-2022-08-26 14:13:17,050 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2c7bd361-6c03-422e-bb23-8362f2a33d67 Address tcp://127.0.0.1:33579 Status: Status.closing
-2022-08-26 14:13:17,051 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:17,051 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
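The ZeroDivisionError tracebacks logged above show what test_run_spec_deserialize_fail exercises: the worker first looks the serialized function up in its deserialization cache (hence the KeyError from cache_loads), then falls back to pickle.loads, and the payload itself raises while being reconstructed. A minimal standalone sketch of that failure pattern, assuming cloudpickle is installed (it produced the byte string logged above); the class F here only mirrors the test fixture and is illustrative, not the test itself:

    import pickle
    import cloudpickle

    class F:
        def __reduce__(self):
            # pickle rebuilds the object by calling this lambda at load time,
            # so deserializing the payload raises ZeroDivisionError.
            return lambda: 1 / 0, ()

    blob = cloudpickle.dumps(F())  # cloudpickle serializes the lambda by value
    try:
        pickle.loads(blob)
    except ZeroDivisionError as exc:
        print("deserialization failed as expected:", exc)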
-distributed/tests/test_worker.py::test_acquire_replicas 2022-08-26 14:13:17,284 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:17,286 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:17,286 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38991
-2022-08-26 14:13:17,286 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38389
-2022-08-26 14:13:17,291 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44643
-2022-08-26 14:13:17,291 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44643
-2022-08-26 14:13:17,291 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:17,291 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34109
-2022-08-26 14:13:17,291 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38991
-2022-08-26 14:13:17,291 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,291 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:17,291 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:17,291 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bvsar7wt
-2022-08-26 14:13:17,291 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,292 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37961
-2022-08-26 14:13:17,292 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37961
-2022-08-26 14:13:17,292 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:17,292 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34451
-2022-08-26 14:13:17,292 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38991
-2022-08-26 14:13:17,292 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,292 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:17,292 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:17,292 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7uu3mlvd
-2022-08-26 14:13:17,292 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,295 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44643', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:17,296 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44643
-2022-08-26 14:13:17,296 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,296 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37961', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:17,296 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37961
-2022-08-26 14:13:17,296 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,297 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38991
-2022-08-26 14:13:17,297 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,297 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38991
-2022-08-26 14:13:17,297 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,297 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,297 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,311 - distributed.scheduler - INFO - Receive client connection: Client-e77de844-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,311 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,344 - distributed.scheduler - INFO - Remove client Client-e77de844-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,344 - distributed.scheduler - INFO - Remove client Client-e77de844-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,344 - distributed.scheduler - INFO - Close client connection: Client-e77de844-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,345 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44643
-2022-08-26 14:13:17,345 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37961
-2022-08-26 14:13:17,346 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44643', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:17,346 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44643
-2022-08-26 14:13:17,346 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37961', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:17,346 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37961
-2022-08-26 14:13:17,346 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:17,346 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-85a1805b-f051-409c-9cdd-859c3ebd827a Address tcp://127.0.0.1:44643 Status: Status.closing
-2022-08-26 14:13:17,347 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-425a8b10-d58c-45ca-bac8-297b098247ee Address tcp://127.0.0.1:37961 Status: Status.closing
-2022-08-26 14:13:17,348 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:17,348 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_acquire_replicas_same_channel 2022-08-26 14:13:17,581 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:17,583 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:17,583 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39417
-2022-08-26 14:13:17,583 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36861
-2022-08-26 14:13:17,588 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35171
-2022-08-26 14:13:17,588 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35171
-2022-08-26 14:13:17,588 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:17,588 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38187
-2022-08-26 14:13:17,588 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39417
-2022-08-26 14:13:17,588 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,588 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:17,588 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:17,588 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tmpa13mx
-2022-08-26 14:13:17,588 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,589 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33959
-2022-08-26 14:13:17,589 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33959
-2022-08-26 14:13:17,589 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:17,589 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43489
-2022-08-26 14:13:17,589 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39417
-2022-08-26 14:13:17,589 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,589 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:17,589 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:17,589 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7tdaaw4_
-2022-08-26 14:13:17,589 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,592 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35171', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:17,593 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35171
-2022-08-26 14:13:17,593 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,593 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33959', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:17,593 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33959
-2022-08-26 14:13:17,593 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,594 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39417
-2022-08-26 14:13:17,594 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,594 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39417
-2022-08-26 14:13:17,594 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,594 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,594 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,608 - distributed.scheduler - INFO - Receive client connection: Client-e7ab3c83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,608 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,645 - distributed.scheduler - INFO - Remove client Client-e7ab3c83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,645 - distributed.scheduler - INFO - Remove client Client-e7ab3c83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,645 - distributed.scheduler - INFO - Close client connection: Client-e7ab3c83-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,646 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35171
-2022-08-26 14:13:17,646 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33959
-2022-08-26 14:13:17,647 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35171', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:17,647 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35171
-2022-08-26 14:13:17,647 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33959', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:17,648 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33959
-2022-08-26 14:13:17,648 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:17,648 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-10873068-3d21-4a0a-a052-d848bdf7d768 Address tcp://127.0.0.1:35171 Status: Status.closing
-2022-08-26 14:13:17,648 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-489dbd8b-b0b0-44a9-8efe-8f3fe08e0767 Address tcp://127.0.0.1:33959 Status: Status.closing
-2022-08-26 14:13:17,649 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:17,649 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_acquire_replicas_many 2022-08-26 14:13:17,883 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:17,885 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:17,885 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44557
-2022-08-26 14:13:17,885 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43787
-2022-08-26 14:13:17,891 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36639
-2022-08-26 14:13:17,891 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36639
-2022-08-26 14:13:17,891 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:17,891 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34959
-2022-08-26 14:13:17,891 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44557
-2022-08-26 14:13:17,891 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,891 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:17,892 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:17,892 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6_afesg9
-2022-08-26 14:13:17,892 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,892 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43009
-2022-08-26 14:13:17,892 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43009
-2022-08-26 14:13:17,892 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:17,892 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43151
-2022-08-26 14:13:17,892 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44557
-2022-08-26 14:13:17,892 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,892 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:17,893 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:17,893 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-zep7y4kk
-2022-08-26 14:13:17,893 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,893 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36603
-2022-08-26 14:13:17,893 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36603
-2022-08-26 14:13:17,893 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:13:17,893 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37483
-2022-08-26 14:13:17,893 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44557
-2022-08-26 14:13:17,893 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,893 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:17,894 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:17,894 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gqo5b6gu
-2022-08-26 14:13:17,894 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,897 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36639', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:17,898 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36639
-2022-08-26 14:13:17,898 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,898 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43009', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:17,898 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43009
-2022-08-26 14:13:17,899 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,899 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36603', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:17,899 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36603
-2022-08-26 14:13:17,899 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,899 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44557
-2022-08-26 14:13:17,900 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,900 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44557
-2022-08-26 14:13:17,900 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,900 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44557
-2022-08-26 14:13:17,900 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:17,900 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,900 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,901 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:17,914 - distributed.scheduler - INFO - Receive client connection: Client-e7d9ff2e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:17,915 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:18,462 - distributed.scheduler - INFO - Remove client Client-e7d9ff2e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:18,462 - distributed.scheduler - INFO - Remove client Client-e7d9ff2e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:18,462 - distributed.scheduler - INFO - Close client connection: Client-e7d9ff2e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:18,463 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36639
-2022-08-26 14:13:18,463 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43009
-2022-08-26 14:13:18,463 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36603
-2022-08-26 14:13:18,465 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36639', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:18,465 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36639
-2022-08-26 14:13:18,465 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43009', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:18,465 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43009
-2022-08-26 14:13:18,465 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36603', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:18,465 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36603
-2022-08-26 14:13:18,465 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:18,465 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a8660c04-c532-4f3a-adbb-e56e9ba6a935 Address tcp://127.0.0.1:36639 Status: Status.closing
-2022-08-26 14:13:18,466 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8f797b31-374e-4e40-bb54-55c96e8330c3 Address tcp://127.0.0.1:43009 Status: Status.closing
-2022-08-26 14:13:18,466 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c28bb0fb-9aff-4e6b-96c5-e4d17f876e9c Address tcp://127.0.0.1:36603 Status: Status.closing
-2022-08-26 14:13:18,468 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:18,468 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_acquire_replicas_already_in_flight 2022-08-26 14:13:18,702 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:18,704 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:18,704 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46315
-2022-08-26 14:13:18,704 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44699
-2022-08-26 14:13:18,707 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41161
-2022-08-26 14:13:18,707 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41161
-2022-08-26 14:13:18,707 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:18,707 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43605
-2022-08-26 14:13:18,707 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46315
-2022-08-26 14:13:18,707 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:18,707 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:18,707 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:18,707 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cgvztje5
-2022-08-26 14:13:18,707 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:18,709 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41161', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:18,709 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41161
-2022-08-26 14:13:18,709 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:18,710 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46315
-2022-08-26 14:13:18,710 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:18,710 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:18,723 - distributed.scheduler - INFO - Receive client connection: Client-e85569f4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:18,723 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:18,726 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40649
-2022-08-26 14:13:18,726 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40649
-2022-08-26 14:13:18,727 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34089
-2022-08-26 14:13:18,727 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46315
-2022-08-26 14:13:18,727 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:18,727 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:18,727 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:18,727 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-kndzc2si
-2022-08-26 14:13:18,727 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:18,729 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40649', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:18,729 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40649
-2022-08-26 14:13:18,729 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:18,729 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46315
-2022-08-26 14:13:18,729 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:18,732 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:18,744 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40649
-2022-08-26 14:13:18,745 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40649', status: closing, memory: 2, processing: 0>
-2022-08-26 14:13:18,745 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40649
-2022-08-26 14:13:18,745 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: BlockedGatherDep-d6bc0b0b-2886-486d-93e2-22a39ff44942 Address tcp://127.0.0.1:40649 Status: Status.closing
-2022-08-26 14:13:18,757 - distributed.scheduler - INFO - Remove client Client-e85569f4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:18,757 - distributed.scheduler - INFO - Remove client Client-e85569f4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:18,758 - distributed.scheduler - INFO - Close client connection: Client-e85569f4-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:18,758 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41161
-2022-08-26 14:13:18,759 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41161', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:18,759 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41161
-2022-08-26 14:13:18,759 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:18,759 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-27458e31-56cf-434b-ae99-36874a299a62 Address tcp://127.0.0.1:41161 Status: Status.closing
-2022-08-26 14:13:18,759 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:18,759 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_forget_acquire_replicas SKIPPED
-distributed/tests/test_worker.py::test_remove_replicas_simple 2022-08-26 14:13:18,996 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:18,997 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:18,997 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37031
-2022-08-26 14:13:18,997 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38133
-2022-08-26 14:13:19,002 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40489
-2022-08-26 14:13:19,002 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40489
-2022-08-26 14:13:19,002 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:19,002 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35351
-2022-08-26 14:13:19,002 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37031
-2022-08-26 14:13:19,002 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,002 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:19,002 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:19,002 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lqjaab4c
-2022-08-26 14:13:19,002 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,003 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38641
-2022-08-26 14:13:19,003 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38641
-2022-08-26 14:13:19,003 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:19,003 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43201
-2022-08-26 14:13:19,003 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37031
-2022-08-26 14:13:19,003 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,003 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:19,003 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:19,003 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ek9pgti5
-2022-08-26 14:13:19,003 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,006 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40489', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:19,006 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40489
-2022-08-26 14:13:19,007 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,007 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38641', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:19,007 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38641
-2022-08-26 14:13:19,007 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,007 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37031
-2022-08-26 14:13:19,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,008 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37031
-2022-08-26 14:13:19,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,008 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,008 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,022 - distributed.scheduler - INFO - Receive client connection: Client-e882f950-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,022 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,088 - distributed.scheduler - INFO - Remove client Client-e882f950-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,088 - distributed.scheduler - INFO - Remove client Client-e882f950-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,088 - distributed.scheduler - INFO - Close client connection: Client-e882f950-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,088 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40489
-2022-08-26 14:13:19,089 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38641
-2022-08-26 14:13:19,090 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40489', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:19,090 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40489
-2022-08-26 14:13:19,090 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38641', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:19,090 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38641
-2022-08-26 14:13:19,090 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:19,090 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8ddfb96e-d44d-40c9-a6e9-fbd565171ea1 Address tcp://127.0.0.1:40489 Status: Status.closing
-2022-08-26 14:13:19,090 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-0b051dab-b098-48dc-82ab-7295f1b43f9a Address tcp://127.0.0.1:38641 Status: Status.closing
-2022-08-26 14:13:19,091 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:19,092 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_remove_replicas_while_computing 2022-08-26 14:13:19,325 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:19,327 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:19,327 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33227
-2022-08-26 14:13:19,327 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34775
-2022-08-26 14:13:19,332 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40683
-2022-08-26 14:13:19,332 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40683
-2022-08-26 14:13:19,332 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:19,332 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41359
-2022-08-26 14:13:19,332 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33227
-2022-08-26 14:13:19,332 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,332 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:19,332 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:19,332 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fmjyp8no
-2022-08-26 14:13:19,332 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,333 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38761
-2022-08-26 14:13:19,333 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38761
-2022-08-26 14:13:19,333 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:19,333 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37021
-2022-08-26 14:13:19,333 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33227
-2022-08-26 14:13:19,333 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,333 - distributed.worker - INFO -               Threads:                          6
-2022-08-26 14:13:19,333 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:19,333 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rwwerr22
-2022-08-26 14:13:19,333 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,336 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40683', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:19,336 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40683
-2022-08-26 14:13:19,336 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,337 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38761', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:19,337 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38761
-2022-08-26 14:13:19,337 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,337 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33227
-2022-08-26 14:13:19,337 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,338 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33227
-2022-08-26 14:13:19,338 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,338 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,338 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,352 - distributed.scheduler - INFO - Receive client connection: Client-e8b54e8a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,352 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,440 - distributed.scheduler - INFO - Remove client Client-e8b54e8a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,440 - distributed.scheduler - INFO - Remove client Client-e8b54e8a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,440 - distributed.scheduler - INFO - Close client connection: Client-e8b54e8a-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,441 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40683
-2022-08-26 14:13:19,441 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38761
-2022-08-26 14:13:19,442 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40683', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:19,442 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40683
-2022-08-26 14:13:19,442 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38761', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:19,442 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38761
-2022-08-26 14:13:19,442 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:19,443 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7fb2007f-a36f-44e0-b29e-fe01cc5c64e2 Address tcp://127.0.0.1:40683 Status: Status.closing
-2022-08-26 14:13:19,443 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-dc51ca05-1705-4e0e-9d9d-f341c1f439e9 Address tcp://127.0.0.1:38761 Status: Status.closing
-2022-08-26 14:13:19,445 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:19,445 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_who_has_consistent_remove_replicas 2022-08-26 14:13:19,680 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:19,682 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:19,682 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44261
-2022-08-26 14:13:19,682 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37785
-2022-08-26 14:13:19,689 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38763
-2022-08-26 14:13:19,689 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38763
-2022-08-26 14:13:19,689 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:19,689 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35675
-2022-08-26 14:13:19,689 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44261
-2022-08-26 14:13:19,689 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,689 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:19,689 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:19,689 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3pawaury
-2022-08-26 14:13:19,689 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,690 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46793
-2022-08-26 14:13:19,690 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46793
-2022-08-26 14:13:19,690 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:19,690 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41737
-2022-08-26 14:13:19,690 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44261
-2022-08-26 14:13:19,690 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,690 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:19,690 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:19,690 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-as7k0t9j
-2022-08-26 14:13:19,691 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,691 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36169
-2022-08-26 14:13:19,691 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36169
-2022-08-26 14:13:19,691 - distributed.worker - INFO -           Worker name:                          2
-2022-08-26 14:13:19,691 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33371
-2022-08-26 14:13:19,691 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44261
-2022-08-26 14:13:19,691 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,691 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:19,692 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:19,692 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ikqogzyt
-2022-08-26 14:13:19,692 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,696 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38763', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:19,696 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38763
-2022-08-26 14:13:19,696 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,696 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46793', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:19,697 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46793
-2022-08-26 14:13:19,697 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,697 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36169', name: 2, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:19,697 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36169
-2022-08-26 14:13:19,698 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,698 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44261
-2022-08-26 14:13:19,698 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,698 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44261
-2022-08-26 14:13:19,698 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,698 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44261
-2022-08-26 14:13:19,699 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:19,699 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,699 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,699 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,713 - distributed.scheduler - INFO - Receive client connection: Client-e8ec6ca8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,713 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:19,758 - distributed.scheduler - INFO - Remove client Client-e8ec6ca8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,758 - distributed.scheduler - INFO - Remove client Client-e8ec6ca8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,758 - distributed.scheduler - INFO - Close client connection: Client-e8ec6ca8-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:19,758 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38763
-2022-08-26 14:13:19,759 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46793
-2022-08-26 14:13:19,759 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36169
-2022-08-26 14:13:19,760 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38763', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:19,760 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38763
-2022-08-26 14:13:19,760 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46793', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:19,761 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46793
-2022-08-26 14:13:19,761 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36169', name: 2, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:19,761 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36169
-2022-08-26 14:13:19,761 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:19,761 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-13569ff0-bd12-4148-8ac0-47ee9d259ecc Address tcp://127.0.0.1:38763 Status: Status.closing
-2022-08-26 14:13:19,761 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9f11dc85-d065-4503-b1b3-44f1720fc0c1 Address tcp://127.0.0.1:46793 Status: Status.closing
-2022-08-26 14:13:19,761 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-65f870d4-eb30-4c81-8cf7-f4da3b81009c Address tcp://127.0.0.1:36169 Status: Status.closing
-2022-08-26 14:13:19,763 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:19,763 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_acquire_replicas_with_no_priority 2022-08-26 14:13:19,999 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:20,001 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:20,001 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43711
-2022-08-26 14:13:20,001 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44983
-2022-08-26 14:13:20,006 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45599
-2022-08-26 14:13:20,006 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45599
-2022-08-26 14:13:20,006 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:20,006 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35607
-2022-08-26 14:13:20,006 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43711
-2022-08-26 14:13:20,006 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,006 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:20,006 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:20,006 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-aew_vc05
-2022-08-26 14:13:20,006 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,007 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38315
-2022-08-26 14:13:20,007 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38315
-2022-08-26 14:13:20,007 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:20,007 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34445
-2022-08-26 14:13:20,007 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43711
-2022-08-26 14:13:20,007 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,007 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:20,007 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:20,007 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-x5utlo86
-2022-08-26 14:13:20,007 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,010 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45599', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:20,010 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45599
-2022-08-26 14:13:20,010 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,011 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38315', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:20,011 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38315
-2022-08-26 14:13:20,011 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,011 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43711
-2022-08-26 14:13:20,011 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,012 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43711
-2022-08-26 14:13:20,012 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,012 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,012 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,026 - distributed.scheduler - INFO - Receive client connection: Client-e91c2ca1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,026 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,062 - distributed.scheduler - INFO - Remove client Client-e91c2ca1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,063 - distributed.scheduler - INFO - Remove client Client-e91c2ca1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,063 - distributed.scheduler - INFO - Close client connection: Client-e91c2ca1-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,063 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45599
-2022-08-26 14:13:20,064 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38315
-2022-08-26 14:13:20,065 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45599', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:20,065 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45599
-2022-08-26 14:13:20,065 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38315', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:20,065 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38315
-2022-08-26 14:13:20,065 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:20,065 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-4d501ab2-a554-4943-b1bd-4f9c1faaec94 Address tcp://127.0.0.1:45599 Status: Status.closing
-2022-08-26 14:13:20,065 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e6583c8e-78f1-4d32-b76f-6379ce1cd13c Address tcp://127.0.0.1:38315 Status: Status.closing
-2022-08-26 14:13:20,066 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:20,067 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_acquire_replicas_large_data 2022-08-26 14:13:20,303 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:20,304 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:20,305 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35405
-2022-08-26 14:13:20,305 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43625
-2022-08-26 14:13:20,308 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46237
-2022-08-26 14:13:20,308 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46237
-2022-08-26 14:13:20,308 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:20,308 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39475
-2022-08-26 14:13:20,308 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35405
-2022-08-26 14:13:20,308 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,308 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:20,308 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:20,308 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-15d28yaz
-2022-08-26 14:13:20,308 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,310 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46237', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:20,310 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46237
-2022-08-26 14:13:20,310 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,311 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35405
-2022-08-26 14:13:20,311 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,311 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,324 - distributed.scheduler - INFO - Receive client connection: Client-e949bd6e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,325 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,354 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42823
-2022-08-26 14:13:20,354 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42823
-2022-08-26 14:13:20,354 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37113
-2022-08-26 14:13:20,354 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35405
-2022-08-26 14:13:20,354 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,354 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:20,354 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:20,354 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4ljrq8c9
-2022-08-26 14:13:20,354 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,356 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42823', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:20,357 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42823
-2022-08-26 14:13:20,357 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,357 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35405
-2022-08-26 14:13:20,357 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,357 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,369 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42823
-2022-08-26 14:13:20,370 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42823', status: closing, memory: 10, processing: 0>
-2022-08-26 14:13:20,370 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42823
-2022-08-26 14:13:20,370 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: BlockedGatherDep-7093a404-6626-441d-b971-a0045e06b2df Address tcp://127.0.0.1:42823 Status: Status.closing
-2022-08-26 14:13:20,382 - distributed.scheduler - INFO - Remove client Client-e949bd6e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,382 - distributed.scheduler - INFO - Remove client Client-e949bd6e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,382 - distributed.scheduler - INFO - Close client connection: Client-e949bd6e-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,382 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46237
-2022-08-26 14:13:20,383 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46237', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:20,383 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46237
-2022-08-26 14:13:20,383 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:20,383 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2b887174-698d-46bf-9344-cb303f16ea39 Address tcp://127.0.0.1:46237 Status: Status.closing
-2022-08-26 14:13:20,384 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:20,384 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_missing_released_zombie_tasks 2022-08-26 14:13:20,619 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:20,621 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:20,621 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43895
-2022-08-26 14:13:20,621 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40037
-2022-08-26 14:13:20,625 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45105
-2022-08-26 14:13:20,625 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45105
-2022-08-26 14:13:20,625 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:20,626 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46045
-2022-08-26 14:13:20,626 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43895
-2022-08-26 14:13:20,626 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,626 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:20,626 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:20,626 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-e_xs71ut
-2022-08-26 14:13:20,626 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,626 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45201
-2022-08-26 14:13:20,626 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45201
-2022-08-26 14:13:20,626 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:20,626 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39333
-2022-08-26 14:13:20,627 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43895
-2022-08-26 14:13:20,627 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,627 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:20,627 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:20,627 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xzx4v6b5
-2022-08-26 14:13:20,627 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,629 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45105', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:20,630 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45105
-2022-08-26 14:13:20,630 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,630 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45201', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:20,630 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45201
-2022-08-26 14:13:20,631 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,631 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43895
-2022-08-26 14:13:20,631 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,631 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43895
-2022-08-26 14:13:20,631 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,631 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,631 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,645 - distributed.scheduler - INFO - Receive client connection: Client-e97aacfe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,645 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,668 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45105
-2022-08-26 14:13:20,669 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45105', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:13:20,669 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45105
-2022-08-26 14:13:20,669 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b66e4358-3066-44f3-b34d-98ec6c4ca101 Address tcp://127.0.0.1:45105 Status: Status.closing
-2022-08-26 14:13:20,681 - distributed.scheduler - INFO - Remove client Client-e97aacfe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,682 - distributed.scheduler - INFO - Remove client Client-e97aacfe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,682 - distributed.scheduler - INFO - Close client connection: Client-e97aacfe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,682 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45201
-2022-08-26 14:13:20,683 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d7bbc92e-cde9-4010-af4d-c61241ca416d Address tcp://127.0.0.1:45201 Status: Status.closing
-2022-08-26 14:13:20,683 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45201', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:20,683 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45201
-2022-08-26 14:13:20,684 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:20,684 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:20,684 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_missing_released_zombie_tasks_2 2022-08-26 14:13:20,919 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:20,920 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:20,920 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45097
-2022-08-26 14:13:20,921 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33825
-2022-08-26 14:13:20,923 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36183
-2022-08-26 14:13:20,923 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36183
-2022-08-26 14:13:20,923 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:20,923 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40997
-2022-08-26 14:13:20,924 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45097
-2022-08-26 14:13:20,924 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,924 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:20,924 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:20,924 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-65cdfqk_
-2022-08-26 14:13:20,924 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,926 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36183', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:20,926 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36183
-2022-08-26 14:13:20,926 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,926 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45097
-2022-08-26 14:13:20,926 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,926 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,940 - distributed.scheduler - INFO - Receive client connection: Client-e9a7a579-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,940 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,943 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41433
-2022-08-26 14:13:20,943 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41433
-2022-08-26 14:13:20,943 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32977
-2022-08-26 14:13:20,943 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:45097
-2022-08-26 14:13:20,943 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,943 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:20,943 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:20,943 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-jff27av7
-2022-08-26 14:13:20,944 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,945 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41433', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:20,946 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41433
-2022-08-26 14:13:20,946 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,946 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:45097
-2022-08-26 14:13:20,946 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:20,948 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:20,955 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:41433
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 225, in read
-    frames_nbytes = await stream.read_bytes(fmt_size)
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1992, in gather_dep
-    response = await get_data_from_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2731, in get_data_from_worker
-    return await retry_operation(_get_data, operation="get_data_from_worker")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 383, in retry_operation
-    return await retry(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 368, in retry
-    return await coro()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2711, in _get_data
-    response = await send_recv(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 919, in send_recv
-    response = await comm.read(deserializers=deserializers)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 241, in read
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://127.0.0.1:41834 remote=tcp://127.0.0.1:41433>: Stream is closed
-2022-08-26 14:13:20,966 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41433
-2022-08-26 14:13:20,967 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41433', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:20,967 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41433
-2022-08-26 14:13:20,967 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: BrokenWorker-81d0ec4d-cc9f-44c7-8028-f87aa4525759 Address tcp://127.0.0.1:41433 Status: Status.closing
-2022-08-26 14:13:20,972 - distributed.scheduler - INFO - Remove client Client-e9a7a579-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,972 - distributed.scheduler - INFO - Remove client Client-e9a7a579-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,972 - distributed.scheduler - INFO - Close client connection: Client-e9a7a579-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:20,973 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36183
-2022-08-26 14:13:20,973 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36183', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:20,974 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36183
-2022-08-26 14:13:20,974 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:20,974 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d896a4c2-691a-466b-a5c2-1f5ababcfd18 Address tcp://127.0.0.1:36183 Status: Status.closing
-2022-08-26 14:13:20,974 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:20,974 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_status_sync 2022-08-26 14:13:21,209 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:21,211 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:21,211 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38047
-2022-08-26 14:13:21,211 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37543
-2022-08-26 14:13:21,214 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36003
-2022-08-26 14:13:21,214 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36003
-2022-08-26 14:13:21,214 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:21,214 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33565
-2022-08-26 14:13:21,214 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38047
-2022-08-26 14:13:21,214 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,214 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:21,214 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:21,214 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_4dmr9x0
-2022-08-26 14:13:21,214 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,216 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36003', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:21,216 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36003
-2022-08-26 14:13:21,216 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,217 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38047
-2022-08-26 14:13:21,217 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,217 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,249 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:36003
-2022-08-26 14:13:21,249 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:36003; no unique keys need to be moved away.
-2022-08-26 14:13:21,249 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36003', name: 0, status: closing_gracefully, memory: 0, processing: 0>
-2022-08-26 14:13:21,249 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36003
-2022-08-26 14:13:21,249 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:21,249 - distributed.scheduler - INFO - Retired worker tcp://127.0.0.1:36003
-2022-08-26 14:13:21,250 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36003
-2022-08-26 14:13:21,250 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-90ab4fc6-7986-45b7-a3c5-9c85a1cd45a1 Address tcp://127.0.0.1:36003 Status: Status.closing
-2022-08-26 14:13:21,251 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:21,251 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_task_flight_compute_oserror 2022-08-26 14:13:21,484 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:21,486 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:21,486 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34775
-2022-08-26 14:13:21,486 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:44253
-2022-08-26 14:13:21,491 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43661
-2022-08-26 14:13:21,491 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43661
-2022-08-26 14:13:21,491 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:21,491 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44617
-2022-08-26 14:13:21,491 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34775
-2022-08-26 14:13:21,491 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,491 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:21,491 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:21,491 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ocjcgesp
-2022-08-26 14:13:21,491 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,492 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35363
-2022-08-26 14:13:21,492 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35363
-2022-08-26 14:13:21,492 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:21,492 - distributed.worker - INFO -          dashboard at:            127.0.0.1:45747
-2022-08-26 14:13:21,492 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34775
-2022-08-26 14:13:21,492 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,492 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:21,492 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:21,492 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-2sw41ybv
-2022-08-26 14:13:21,492 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,495 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43661', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:21,495 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43661
-2022-08-26 14:13:21,496 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,496 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35363', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:21,496 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35363
-2022-08-26 14:13:21,496 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,496 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34775
-2022-08-26 14:13:21,497 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,497 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34775
-2022-08-26 14:13:21,497 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,497 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,497 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,511 - distributed.scheduler - INFO - Receive client connection: Client-e9fec43c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:21,511 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,535 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43661
-2022-08-26 14:13:21,536 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43661', name: 0, status: closing, memory: 1, processing: 0>
-2022-08-26 14:13:21,536 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43661
-2022-08-26 14:13:21,536 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-77c34506-f797-41eb-8016-5e93ff3575c1 Address tcp://127.0.0.1:43661 Status: Status.closing
-2022-08-26 14:13:21,537 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:43661
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 317, in write
-    raise StreamClosedError()
-tornado.iostream.StreamClosedError: Stream is closed
-
-The above exception was the direct cause of the following exception:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1992, in gather_dep
-    response = await get_data_from_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2731, in get_data_from_worker
-    return await retry_operation(_get_data, operation="get_data_from_worker")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 383, in retry_operation
-    return await retry(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 368, in retry
-    return await coro()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2711, in _get_data
-    response = await send_recv(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 917, in send_recv
-    await comm.write(msg, serializers=serializers, on_error="raise")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 1817, in write
-    return await self.comm.write(msg, serializers=serializers, on_error=on_error)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 328, in write
-    convert_stream_closed_error(self, e)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
-    raise CommClosedError(f"in {obj}: {exc}") from exc
-distributed.comm.core.CommClosedError: in <TCP (closed) ConnectionPool local=tcp://127.0.0.1:43768 remote=tcp://127.0.0.1:43661>: Stream is closed
-2022-08-26 14:13:21,559 - distributed.scheduler - INFO - Remove client Client-e9fec43c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:21,559 - distributed.scheduler - INFO - Remove client Client-e9fec43c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:21,559 - distributed.scheduler - INFO - Close client connection: Client-e9fec43c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:21,560 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35363
-2022-08-26 14:13:21,560 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35363', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:21,560 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35363
-2022-08-26 14:13:21,560 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:21,561 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-94d26da9-d089-40d3-9779-881b1c334a46 Address tcp://127.0.0.1:35363 Status: Status.closing
-2022-08-26 14:13:21,561 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:21,561 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gather_dep_cancelled_rescheduled 2022-08-26 14:13:21,795 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:21,797 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:21,797 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46855
-2022-08-26 14:13:21,797 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46483
-2022-08-26 14:13:21,801 - distributed.scheduler - INFO - Receive client connection: Client-ea2af755-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:21,801 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,804 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38443
-2022-08-26 14:13:21,804 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38443
-2022-08-26 14:13:21,804 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34197
-2022-08-26 14:13:21,804 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46855
-2022-08-26 14:13:21,804 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,804 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:21,804 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:21,804 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-9_51u05_
-2022-08-26 14:13:21,804 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,806 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38443', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:21,807 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38443
-2022-08-26 14:13:21,807 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,807 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46855
-2022-08-26 14:13:21,807 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,810 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46085
-2022-08-26 14:13:21,810 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46085
-2022-08-26 14:13:21,810 - distributed.worker - INFO -          dashboard at:            127.0.0.1:32905
-2022-08-26 14:13:21,810 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46855
-2022-08-26 14:13:21,810 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,810 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:21,810 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:21,810 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7g_btrop
-2022-08-26 14:13:21,810 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,810 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,812 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46085', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:21,812 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46085
-2022-08-26 14:13:21,813 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,813 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46855
-2022-08-26 14:13:21,813 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:21,816 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:21,856 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46085
-2022-08-26 14:13:21,856 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: BlockedGatherDep-e65b0138-debc-4d1f-ab79-1147f7e58867 Address tcp://127.0.0.1:46085 Status: Status.closing
-2022-08-26 14:13:21,857 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46085', status: closing, memory: 4, processing: 0>
-2022-08-26 14:13:21,857 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46085
-2022-08-26 14:13:21,858 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38443
-2022-08-26 14:13:21,858 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38443', status: closing, memory: 2, processing: 0>
-2022-08-26 14:13:21,859 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38443
-2022-08-26 14:13:21,859 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:21,859 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: BlockedGetData-dd910888-6482-4413-8c54-5b5ab65435a6 Address tcp://127.0.0.1:38443 Status: Status.closing
-2022-08-26 14:13:21,871 - distributed.scheduler - INFO - Remove client Client-ea2af755-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:21,871 - distributed.scheduler - INFO - Remove client Client-ea2af755-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:21,871 - distributed.scheduler - INFO - Close client connection: Client-ea2af755-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:21,871 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:21,872 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gather_dep_do_not_handle_response_of_not_requested_tasks 2022-08-26 14:13:22,107 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:22,108 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:22,108 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34179
-2022-08-26 14:13:22,108 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38299
-2022-08-26 14:13:22,111 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39451
-2022-08-26 14:13:22,111 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39451
-2022-08-26 14:13:22,111 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:22,111 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39027
-2022-08-26 14:13:22,111 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34179
-2022-08-26 14:13:22,112 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,112 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:22,112 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:22,112 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-_eaiu4n4
-2022-08-26 14:13:22,112 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,114 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39451', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:22,114 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39451
-2022-08-26 14:13:22,114 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,114 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34179
-2022-08-26 14:13:22,114 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,114 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,128 - distributed.scheduler - INFO - Receive client connection: Client-ea5ceefe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,128 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,131 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37207
-2022-08-26 14:13:22,131 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37207
-2022-08-26 14:13:22,131 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39641
-2022-08-26 14:13:22,131 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34179
-2022-08-26 14:13:22,131 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,131 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:22,131 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:22,132 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-gwuokkgc
-2022-08-26 14:13:22,132 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,134 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37207', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:22,134 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37207
-2022-08-26 14:13:22,134 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,134 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34179
-2022-08-26 14:13:22,134 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,137 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,172 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37207
-2022-08-26 14:13:22,173 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37207', status: closing, memory: 2, processing: 0>
-2022-08-26 14:13:22,173 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37207
-2022-08-26 14:13:22,173 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: BlockedGatherDep-f2b11e79-2232-4994-8d2b-66d5b1c54827 Address tcp://127.0.0.1:37207 Status: Status.closing
-2022-08-26 14:13:22,185 - distributed.scheduler - INFO - Remove client Client-ea5ceefe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,185 - distributed.scheduler - INFO - Remove client Client-ea5ceefe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,185 - distributed.scheduler - INFO - Close client connection: Client-ea5ceefe-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,186 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39451
-2022-08-26 14:13:22,187 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39451', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:22,187 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:39451
-2022-08-26 14:13:22,187 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:22,187 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-00b36448-b9f2-4740-b91e-413c22b85122 Address tcp://127.0.0.1:39451 Status: Status.closing
-2022-08-26 14:13:22,187 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:22,188 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_gather_dep_no_longer_in_flight_tasks 2022-08-26 14:13:22,422 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:22,424 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:22,424 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43883
-2022-08-26 14:13:22,424 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:41355
-2022-08-26 14:13:22,427 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35961
-2022-08-26 14:13:22,427 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35961
-2022-08-26 14:13:22,427 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:22,427 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41827
-2022-08-26 14:13:22,427 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43883
-2022-08-26 14:13:22,427 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,427 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:22,427 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:22,427 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-p288pcvk
-2022-08-26 14:13:22,427 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,429 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35961', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:22,429 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35961
-2022-08-26 14:13:22,429 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,430 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43883
-2022-08-26 14:13:22,430 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,430 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,443 - distributed.scheduler - INFO - Receive client connection: Client-ea8d0d9c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,444 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,446 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40781
-2022-08-26 14:13:22,447 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40781
-2022-08-26 14:13:22,447 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39687
-2022-08-26 14:13:22,447 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43883
-2022-08-26 14:13:22,447 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,447 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:22,447 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:22,447 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-m5ry3jj3
-2022-08-26 14:13:22,447 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,449 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40781', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:22,449 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40781
-2022-08-26 14:13:22,449 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,449 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43883
-2022-08-26 14:13:22,449 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,452 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,473 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40781
-2022-08-26 14:13:22,474 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: BlockedGatherDep-0c076b88-998e-49f6-b722-15c44cdb4171 Address tcp://127.0.0.1:40781 Status: Status.closing
-2022-08-26 14:13:22,475 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40781', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:22,475 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40781
-2022-08-26 14:13:22,486 - distributed.scheduler - INFO - Remove client Client-ea8d0d9c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,486 - distributed.scheduler - INFO - Remove client Client-ea8d0d9c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,486 - distributed.scheduler - INFO - Close client connection: Client-ea8d0d9c-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,486 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35961
-2022-08-26 14:13:22,487 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35961', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:22,487 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35961
-2022-08-26 14:13:22,487 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:22,487 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-84da1bef-a9e7-43e6-b96c-496c8572488c Address tcp://127.0.0.1:35961 Status: Status.closing
-2022-08-26 14:13:22,488 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:22,488 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_Worker__to_dict 2022-08-26 14:13:22,722 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:22,724 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:22,724 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:37429
-2022-08-26 14:13:22,724 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37639
-2022-08-26 14:13:22,727 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40161
-2022-08-26 14:13:22,727 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40161
-2022-08-26 14:13:22,727 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:22,727 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33945
-2022-08-26 14:13:22,727 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:37429
-2022-08-26 14:13:22,727 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,727 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:22,727 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:22,727 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-7nhk6aj_
-2022-08-26 14:13:22,727 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,729 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40161', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:22,729 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40161
-2022-08-26 14:13:22,730 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,730 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:37429
-2022-08-26 14:13:22,730 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:22,730 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,743 - distributed.scheduler - INFO - Receive client connection: Client-eabada4d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,744 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:22,766 - distributed.scheduler - INFO - Remove client Client-eabada4d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,766 - distributed.scheduler - INFO - Remove client Client-eabada4d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,766 - distributed.scheduler - INFO - Close client connection: Client-eabada4d-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:22,767 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40161
-2022-08-26 14:13:22,768 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d44176cf-7884-4c1a-a179-7e57daf86259 Address tcp://127.0.0.1:40161 Status: Status.closing
-2022-08-26 14:13:22,768 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40161', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:22,768 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:40161
-2022-08-26 14:13:22,768 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:22,769 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:22,769 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_extension_methods 2022-08-26 14:13:23,003 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:23,005 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:23,005 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34631
-2022-08-26 14:13:23,005 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46037
-2022-08-26 14:13:23,008 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45275
-2022-08-26 14:13:23,008 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45275
-2022-08-26 14:13:23,008 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40603
-2022-08-26 14:13:23,008 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34631
-2022-08-26 14:13:23,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,008 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:23,008 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:23,008 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-at_y3ab8
-2022-08-26 14:13:23,008 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,010 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45275', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:23,010 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45275
-2022-08-26 14:13:23,010 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:23,011 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34631
-2022-08-26 14:13:23,011 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,011 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:23,013 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45275
-2022-08-26 14:13:23,013 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45275', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:23,013 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45275
-2022-08-26 14:13:23,013 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:23,014 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-b516afdd-528f-4e77-9109-97995c0e2725 Address tcp://127.0.0.1:45275 Status: Status.closing
-2022-08-26 14:13:23,014 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:23,015 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_benchmark_hardware 2022-08-26 14:13:23,248 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:23,250 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:23,250 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38293
-2022-08-26 14:13:23,250 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46287
-2022-08-26 14:13:23,255 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33117
-2022-08-26 14:13:23,255 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33117
-2022-08-26 14:13:23,255 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:23,255 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40583
-2022-08-26 14:13:23,255 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38293
-2022-08-26 14:13:23,255 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,255 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:23,255 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:23,255 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-umrciuxr
-2022-08-26 14:13:23,255 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,256 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45789
-2022-08-26 14:13:23,256 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45789
-2022-08-26 14:13:23,256 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:23,256 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37313
-2022-08-26 14:13:23,256 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38293
-2022-08-26 14:13:23,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,256 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:23,256 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:23,256 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a1h269vh
-2022-08-26 14:13:23,256 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,259 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33117', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:23,259 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33117
-2022-08-26 14:13:23,259 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:23,260 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45789', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:23,260 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45789
-2022-08-26 14:13:23,260 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:23,260 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38293
-2022-08-26 14:13:23,260 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,261 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38293
-2022-08-26 14:13:23,261 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,261 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:23,261 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:23,280 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33117
-2022-08-26 14:13:23,281 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45789
-2022-08-26 14:13:23,281 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33117', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:23,282 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:33117
-2022-08-26 14:13:23,282 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45789', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:23,282 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45789
-2022-08-26 14:13:23,282 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:23,282 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-56f63e77-4694-4c02-96f0-7e0da3900e6d Address tcp://127.0.0.1:33117 Status: Status.closing
-2022-08-26 14:13:23,282 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-85107db0-f21d-45c4-b387-cee1466ab7d1 Address tcp://127.0.0.1:45789 Status: Status.closing
-2022-08-26 14:13:23,283 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:23,283 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_tick_interval 2022-08-26 14:13:23,517 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:23,518 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:23,519 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46873
-2022-08-26 14:13:23,519 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:36405
-2022-08-26 14:13:23,523 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45709
-2022-08-26 14:13:23,523 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45709
-2022-08-26 14:13:23,523 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:23,523 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44667
-2022-08-26 14:13:23,524 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46873
-2022-08-26 14:13:23,524 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,524 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:23,524 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:23,524 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cxr8902g
-2022-08-26 14:13:23,524 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,524 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46697
-2022-08-26 14:13:23,524 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46697
-2022-08-26 14:13:23,524 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:23,524 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34659
-2022-08-26 14:13:23,524 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46873
-2022-08-26 14:13:23,525 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,525 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:23,525 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:23,525 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1_ubbfm5
-2022-08-26 14:13:23,525 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,528 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45709', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:23,528 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45709
-2022-08-26 14:13:23,528 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:23,528 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46697', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:23,528 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46697
-2022-08-26 14:13:23,529 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:23,529 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46873
-2022-08-26 14:13:23,529 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,529 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46873
-2022-08-26 14:13:23,529 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:23,529 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:23,529 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:23,543 - distributed.scheduler - INFO - Receive client connection: Client-eb34ee25-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:23,544 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:28,823 - distributed.scheduler - INFO - Remove client Client-eb34ee25-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:28,823 - distributed.scheduler - INFO - Remove client Client-eb34ee25-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:28,823 - distributed.scheduler - INFO - Close client connection: Client-eb34ee25-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:28,823 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45709
-2022-08-26 14:13:28,824 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46697
-2022-08-26 14:13:28,825 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45709', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:28,825 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45709
-2022-08-26 14:13:28,825 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46697', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:28,825 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46697
-2022-08-26 14:13:28,825 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:28,826 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-d732baf6-180e-4458-beda-3081ea1e602b Address tcp://127.0.0.1:45709 Status: Status.closing
-2022-08-26 14:13:28,826 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-e32b1213-5fe3-493b-8621-4730e2391e5a Address tcp://127.0.0.1:46697 Status: Status.closing
-2022-08-26 14:13:28,827 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:28,827 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_broken_comm SKIPPED (need --r...)
-distributed/tests/test_worker.py::test_do_not_block_event_loop_during_shutdown 2022-08-26 14:13:29,061 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:29,063 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:29,063 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46405
-2022-08-26 14:13:29,063 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42899
-2022-08-26 14:13:29,066 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42727
-2022-08-26 14:13:29,066 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42727
-2022-08-26 14:13:29,066 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36289
-2022-08-26 14:13:29,066 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46405
-2022-08-26 14:13:29,066 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,066 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:29,066 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:29,066 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-6eat946z
-2022-08-26 14:13:29,066 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,068 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42727', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:29,069 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42727
-2022-08-26 14:13:29,069 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:29,069 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46405
-2022-08-26 14:13:29,069 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,069 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42727
-2022-08-26 14:13:29,070 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:29,070 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-18c69080-c2e5-4460-8ed0-7354439b3a61 Address tcp://127.0.0.1:42727 Status: Status.closing
-2022-08-26 14:13:29,071 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42727', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:29,071 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42727
-2022-08-26 14:13:29,071 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:29,170 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:29,171 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_reconnect_argument_deprecated 2022-08-26 14:13:29,403 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:29,405 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:29,405 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35297
-2022-08-26 14:13:29,405 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43871
-2022-08-26 14:13:29,408 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38635
-2022-08-26 14:13:29,408 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38635
-2022-08-26 14:13:29,408 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41667
-2022-08-26 14:13:29,408 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35297
-2022-08-26 14:13:29,408 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,408 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:29,408 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:29,408 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-49wsemfs
-2022-08-26 14:13:29,408 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,410 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38635', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:29,411 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38635
-2022-08-26 14:13:29,411 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:29,411 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35297
-2022-08-26 14:13:29,411 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,411 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38635
-2022-08-26 14:13:29,412 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:29,412 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-80d6d565-0335-4e2a-9f57-b441ab54151f Address tcp://127.0.0.1:38635 Status: Status.closing
-2022-08-26 14:13:29,412 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38635', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:29,412 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38635
-2022-08-26 14:13:29,412 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:29,415 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34499
-2022-08-26 14:13:29,415 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34499
-2022-08-26 14:13:29,416 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42807
-2022-08-26 14:13:29,416 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35297
-2022-08-26 14:13:29,416 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,416 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:29,416 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:29,416 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-uujkxksw
-2022-08-26 14:13:29,416 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,418 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34499', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:29,418 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34499
-2022-08-26 14:13:29,418 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:29,418 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35297
-2022-08-26 14:13:29,418 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,418 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34499
-2022-08-26 14:13:29,419 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:29,419 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2a5ae179-1b33-4a58-8d89-dd2f4136b38d Address tcp://127.0.0.1:34499 Status: Status.closing
-2022-08-26 14:13:29,419 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34499', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:29,419 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34499
-2022-08-26 14:13:29,420 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:29,420 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:29,420 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_worker_running_before_running_plugins 2022-08-26 14:13:29,652 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:29,654 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:29,654 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40733
-2022-08-26 14:13:29,654 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40579
-2022-08-26 14:13:29,658 - distributed.scheduler - INFO - Receive client connection: Client-eed9d938-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:29,658 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:29,662 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36799
-2022-08-26 14:13:29,662 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36799
-2022-08-26 14:13:29,662 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35031
-2022-08-26 14:13:29,662 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40733
-2022-08-26 14:13:29,662 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,662 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:29,662 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:29,662 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-02j39ge9
-2022-08-26 14:13:29,662 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,664 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36799', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:29,664 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36799
-2022-08-26 14:13:29,664 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:29,665 - distributed.worker - INFO - Starting Worker plugin init_worker_new_thread
-2022-08-26 14:13:29,665 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40733
-2022-08-26 14:13:29,665 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,666 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:29,673 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36799
-2022-08-26 14:13:29,674 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36799', status: closing, memory: 1, processing: 0>
-2022-08-26 14:13:29,674 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36799
-2022-08-26 14:13:29,674 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:29,675 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c783fef7-81bc-426d-bd82-4f1aeee97bfd Address tcp://127.0.0.1:36799 Status: Status.closing
-2022-08-26 14:13:29,680 - distributed.scheduler - INFO - Remove client Client-eed9d938-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:29,680 - distributed.scheduler - INFO - Remove client Client-eed9d938-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:29,681 - distributed.scheduler - INFO - Close client connection: Client-eed9d938-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:29,681 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:29,681 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker.py::test_execute_preamble_abort_retirement 2022-08-26 14:13:29,914 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:29,915 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:29,916 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34987
-2022-08-26 14:13:29,916 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45835
-2022-08-26 14:13:29,919 - distributed.scheduler - INFO - Receive client connection: Client-ef01b8ab-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:29,919 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:29,922 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38005
-2022-08-26 14:13:29,922 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38005
-2022-08-26 14:13:29,922 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37957
-2022-08-26 14:13:29,922 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34987
-2022-08-26 14:13:29,922 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,922 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:29,922 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:29,923 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-3nkmkqnw
-2022-08-26 14:13:29,923 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,925 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38005', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:29,925 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38005
-2022-08-26 14:13:29,925 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:29,925 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34987
-2022-08-26 14:13:29,925 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:29,926 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,035 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46317
-2022-08-26 14:13:30,035 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46317
-2022-08-26 14:13:30,035 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41489
-2022-08-26 14:13:30,035 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34987
-2022-08-26 14:13:30,035 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,035 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:30,035 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:30,035 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-bz6aag1w
-2022-08-26 14:13:30,035 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,037 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46317', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:30,037 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46317
-2022-08-26 14:13:30,037 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,038 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34987
-2022-08-26 14:13:30,038 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,038 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,140 - distributed.scheduler - INFO - Retiring worker tcp://127.0.0.1:38005
-2022-08-26 14:13:30,140 - distributed.active_memory_manager - INFO - Retiring worker tcp://127.0.0.1:38005; 1 keys are being moved away.
-2022-08-26 14:13:30,151 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46317
-2022-08-26 14:13:30,152 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: BlockedGatherDep-e6875ca8-3e08-47be-b049-fddbdd359e97 Address tcp://127.0.0.1:46317 Status: Status.closing
-2022-08-26 14:13:30,152 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46317', status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:30,152 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46317
-2022-08-26 14:13:30,166 - distributed.active_memory_manager - WARNING - Tried retiring worker tcp://127.0.0.1:38005, but 1 tasks could not be moved as there are no suitable workers to receive them. The worker will not be retired.
-2022-08-26 14:13:30,177 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38005
-2022-08-26 14:13:30,178 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38005', status: closing, memory: 2, processing: 0>
-2022-08-26 14:13:30,178 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38005
-2022-08-26 14:13:30,178 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:30,178 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: BlockedExecute-a44c32bb-66e3-406a-8b05-6c5b541f347f Address tcp://127.0.0.1:38005 Status: Status.closing
-2022-08-26 14:13:30,191 - distributed.scheduler - INFO - Remove client Client-ef01b8ab-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:30,191 - distributed.scheduler - INFO - Remove client Client-ef01b8ab-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:30,192 - distributed.scheduler - INFO - Close client connection: Client-ef01b8ab-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:30,192 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:30,192 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker_client.py::test_submit_from_worker 2022-08-26 14:13:30,427 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:30,429 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:30,429 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33391
-2022-08-26 14:13:30,429 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33743
-2022-08-26 14:13:30,434 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36271
-2022-08-26 14:13:30,434 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36271
-2022-08-26 14:13:30,434 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:30,434 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46727
-2022-08-26 14:13:30,434 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33391
-2022-08-26 14:13:30,434 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,434 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:30,434 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:30,434 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0k7e24xi
-2022-08-26 14:13:30,435 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,435 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38945
-2022-08-26 14:13:30,435 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38945
-2022-08-26 14:13:30,435 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:30,435 - distributed.worker - INFO -          dashboard at:            127.0.0.1:42649
-2022-08-26 14:13:30,435 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33391
-2022-08-26 14:13:30,435 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,435 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:30,435 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:30,435 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-h86mh8vs
-2022-08-26 14:13:30,436 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,438 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36271', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:30,439 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36271
-2022-08-26 14:13:30,439 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,439 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38945', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:30,439 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38945
-2022-08-26 14:13:30,439 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,440 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33391
-2022-08-26 14:13:30,440 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,440 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33391
-2022-08-26 14:13:30,440 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,440 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,440 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,454 - distributed.scheduler - INFO - Receive client connection: Client-ef536603-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:30,454 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,502 - distributed.scheduler - INFO - Remove client Client-ef536603-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:30,502 - distributed.scheduler - INFO - Remove client Client-ef536603-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:30,502 - distributed.scheduler - INFO - Close client connection: Client-ef536603-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:30,504 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36271
-2022-08-26 14:13:30,504 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38945
-2022-08-26 14:13:30,505 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3f0db65c-9a41-4423-a3dc-69e0102d5ed2 Address tcp://127.0.0.1:36271 Status: Status.closing
-2022-08-26 14:13:30,505 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-109f98eb-75c8-410f-9921-fea63c614464 Address tcp://127.0.0.1:38945 Status: Status.closing
-2022-08-26 14:13:30,506 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36271', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:30,506 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36271
-2022-08-26 14:13:30,506 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38945', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:30,506 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38945
-2022-08-26 14:13:30,506 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:30,507 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:30,507 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker_client.py::test_scatter_from_worker 2022-08-26 14:13:30,742 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:30,744 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:30,744 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38963
-2022-08-26 14:13:30,744 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42211
-2022-08-26 14:13:30,748 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41909
-2022-08-26 14:13:30,748 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41909
-2022-08-26 14:13:30,748 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:30,749 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34089
-2022-08-26 14:13:30,749 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38963
-2022-08-26 14:13:30,749 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,749 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:30,749 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:30,749 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-recds0hm
-2022-08-26 14:13:30,749 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,749 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38183
-2022-08-26 14:13:30,749 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38183
-2022-08-26 14:13:30,749 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:30,749 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38221
-2022-08-26 14:13:30,750 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38963
-2022-08-26 14:13:30,750 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,750 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:30,750 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:30,750 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fwiabre8
-2022-08-26 14:13:30,750 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,753 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41909', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:30,753 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41909
-2022-08-26 14:13:30,753 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,753 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38183', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:30,754 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38183
-2022-08-26 14:13:30,754 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,754 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38963
-2022-08-26 14:13:30,754 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,754 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38963
-2022-08-26 14:13:30,754 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:30,755 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,755 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,768 - distributed.scheduler - INFO - Receive client connection: Client-ef8358ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:30,769 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:30,812 - distributed.scheduler - INFO - Remove client Client-ef8358ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:30,813 - distributed.scheduler - INFO - Remove client Client-ef8358ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:30,813 - distributed.scheduler - INFO - Close client connection: Client-ef8358ba-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:30,814 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41909
-2022-08-26 14:13:30,815 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38183
-2022-08-26 14:13:30,815 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41909', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:30,815 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:41909
-2022-08-26 14:13:30,816 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2e09e2e8-9994-4d58-9dc6-f89df4cbdb0a Address tcp://127.0.0.1:41909 Status: Status.closing
-2022-08-26 14:13:30,816 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2fe6424e-c9d5-4f92-b730-647f5c79190c Address tcp://127.0.0.1:38183 Status: Status.closing
-2022-08-26 14:13:30,816 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38183', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:30,817 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38183
-2022-08-26 14:13:30,817 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:30,817 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:30,817 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker_client.py::test_scatter_singleton 2022-08-26 14:13:31,053 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:31,055 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:31,055 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34233
-2022-08-26 14:13:31,055 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45691
-2022-08-26 14:13:31,059 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46567
-2022-08-26 14:13:31,059 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46567
-2022-08-26 14:13:31,060 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:31,060 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37061
-2022-08-26 14:13:31,060 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34233
-2022-08-26 14:13:31,060 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,060 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:31,060 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:31,060 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xwf424kd
-2022-08-26 14:13:31,060 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,060 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:43325
-2022-08-26 14:13:31,060 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:43325
-2022-08-26 14:13:31,060 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:31,061 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37691
-2022-08-26 14:13:31,061 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34233
-2022-08-26 14:13:31,061 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,061 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:31,061 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:31,061 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-fcd_ln0n
-2022-08-26 14:13:31,061 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,064 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46567', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:31,064 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46567
-2022-08-26 14:13:31,064 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,065 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43325', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:31,065 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43325
-2022-08-26 14:13:31,065 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,065 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34233
-2022-08-26 14:13:31,065 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,065 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34233
-2022-08-26 14:13:31,066 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,066 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,066 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,080 - distributed.scheduler - INFO - Receive client connection: Client-efb2d546-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,080 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,102 - distributed.scheduler - INFO - Remove client Client-efb2d546-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,102 - distributed.scheduler - INFO - Remove client Client-efb2d546-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,103 - distributed.scheduler - INFO - Close client connection: Client-efb2d546-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,104 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46567
-2022-08-26 14:13:31,104 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43325
-2022-08-26 14:13:31,105 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46567', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:31,105 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46567
-2022-08-26 14:13:31,105 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-3f74f30f-9ae8-4f56-a889-d0b6d769aabc Address tcp://127.0.0.1:46567 Status: Status.closing
-2022-08-26 14:13:31,106 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-aa7f5df9-f1bd-4a34-a88a-df3a821a3b18 Address tcp://127.0.0.1:43325 Status: Status.closing
-2022-08-26 14:13:31,106 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43325', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:31,106 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:43325
-2022-08-26 14:13:31,106 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:31,107 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:31,107 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker_client.py::test_gather_multi_machine 2022-08-26 14:13:31,342 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:31,343 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:31,344 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40827
-2022-08-26 14:13:31,344 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46159
-2022-08-26 14:13:31,348 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38655
-2022-08-26 14:13:31,348 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38655
-2022-08-26 14:13:31,348 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:31,348 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37885
-2022-08-26 14:13:31,348 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40827
-2022-08-26 14:13:31,349 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,349 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:31,349 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:31,349 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-91oj9qh3
-2022-08-26 14:13:31,349 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,349 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45281
-2022-08-26 14:13:31,349 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45281
-2022-08-26 14:13:31,349 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:31,349 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33401
-2022-08-26 14:13:31,349 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:40827
-2022-08-26 14:13:31,349 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,350 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:31,350 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:31,350 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lrqewnib
-2022-08-26 14:13:31,350 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,353 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38655', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:31,353 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38655
-2022-08-26 14:13:31,353 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,353 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45281', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:31,354 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45281
-2022-08-26 14:13:31,354 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,354 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40827
-2022-08-26 14:13:31,354 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,354 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:40827
-2022-08-26 14:13:31,354 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,355 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,355 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,368 - distributed.scheduler - INFO - Receive client connection: Client-efdee671-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,369 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,402 - distributed.scheduler - INFO - Remove client Client-efdee671-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,402 - distributed.scheduler - INFO - Remove client Client-efdee671-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,402 - distributed.scheduler - INFO - Close client connection: Client-efdee671-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,404 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38655
-2022-08-26 14:13:31,404 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45281
-2022-08-26 14:13:31,405 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f3b44ad2-7623-40c5-940a-e0853008822f Address tcp://127.0.0.1:38655 Status: Status.closing
-2022-08-26 14:13:31,405 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7830026b-08fc-46b2-92ea-c8cd0135fc93 Address tcp://127.0.0.1:45281 Status: Status.closing
-2022-08-26 14:13:31,405 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38655', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:31,406 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38655
-2022-08-26 14:13:31,406 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45281', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:31,406 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:45281
-2022-08-26 14:13:31,406 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:31,407 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:31,407 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker_client.py::test_same_loop 2022-08-26 14:13:31,641 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:31,643 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:31,643 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44365
-2022-08-26 14:13:31,643 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40035
-2022-08-26 14:13:31,647 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35825
-2022-08-26 14:13:31,647 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35825
-2022-08-26 14:13:31,647 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:31,648 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34501
-2022-08-26 14:13:31,648 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44365
-2022-08-26 14:13:31,648 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,648 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:31,648 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:31,648 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ysnwylpz
-2022-08-26 14:13:31,648 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,648 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42111
-2022-08-26 14:13:31,648 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42111
-2022-08-26 14:13:31,649 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:31,649 - distributed.worker - INFO -          dashboard at:            127.0.0.1:33845
-2022-08-26 14:13:31,649 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44365
-2022-08-26 14:13:31,649 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,649 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:31,649 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:31,649 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-xba_l3m5
-2022-08-26 14:13:31,649 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,652 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35825', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:31,652 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35825
-2022-08-26 14:13:31,652 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,653 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42111', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:31,653 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42111
-2022-08-26 14:13:31,653 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,653 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44365
-2022-08-26 14:13:31,653 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,654 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44365
-2022-08-26 14:13:31,654 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:31,654 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,654 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,668 - distributed.scheduler - INFO - Receive client connection: Client-f00c95f0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,668 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:31,690 - distributed.scheduler - INFO - Remove client Client-f00c95f0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,690 - distributed.scheduler - INFO - Remove client Client-f00c95f0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,691 - distributed.scheduler - INFO - Close client connection: Client-f00c95f0-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:31,691 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35825
-2022-08-26 14:13:31,692 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42111
-2022-08-26 14:13:31,693 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42111', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:31,693 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42111
-2022-08-26 14:13:31,693 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-6ced418a-5025-4c23-97e4-2370be758ba0 Address tcp://127.0.0.1:42111 Status: Status.closing
-2022-08-26 14:13:31,693 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-1a07de8e-c9a1-4aad-9e42-69ad7eed6cf7 Address tcp://127.0.0.1:35825 Status: Status.closing
-2022-08-26 14:13:31,694 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35825', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:31,694 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:35825
-2022-08-26 14:13:31,694 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:31,694 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:31,695 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker_client.py::test_sync 2022-08-26 14:13:32,922 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:13:32,924 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:32,928 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:32,928 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41437
-2022-08-26 14:13:32,928 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:13:32,942 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34235
-2022-08-26 14:13:32,943 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34235
-2022-08-26 14:13:32,943 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40917
-2022-08-26 14:13:32,943 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41437
-2022-08-26 14:13:32,943 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:32,943 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:32,943 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:32,943 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-1ear_ppt
-2022-08-26 14:13:32,943 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:32,987 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35639
-2022-08-26 14:13:32,987 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35639
-2022-08-26 14:13:32,987 - distributed.worker - INFO -          dashboard at:            127.0.0.1:35649
-2022-08-26 14:13:32,987 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41437
-2022-08-26 14:13:32,987 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:32,987 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:32,987 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:32,987 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-11qb5e70
-2022-08-26 14:13:32,987 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:33,254 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34235', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:33,546 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34235
-2022-08-26 14:13:33,546 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:33,546 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41437
-2022-08-26 14:13:33,546 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:33,547 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35639', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:33,547 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:33,548 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35639
-2022-08-26 14:13:33,548 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:33,548 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41437
-2022-08-26 14:13:33,548 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:33,549 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:33,554 - distributed.scheduler - INFO - Receive client connection: Client-f12c5e10-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:33,554 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:33,577 - distributed.scheduler - INFO - Receive client connection: Client-worker-f12fa63f-2583-11ed-8beb-00d861bc4509
-2022-08-26 14:13:33,577 - distributed.core - INFO - Starting established connection
-PASSED2022-08-26 14:13:33,941 - distributed.scheduler - INFO - Remove client Client-f12c5e10-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:33,941 - distributed.scheduler - INFO - Remove client Client-f12c5e10-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:33,942 - distributed.scheduler - INFO - Close client connection: Client-f12c5e10-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_worker_client.py::test_async 2022-08-26 14:13:33,955 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:33,957 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:33,957 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43451
-2022-08-26 14:13:33,957 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38733
-2022-08-26 14:13:33,958 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-1ear_ppt', purging
-2022-08-26 14:13:33,958 - distributed.diskutils - INFO - Found stale lock file and directory '/tmp/dask-worker-space/worker-11qb5e70', purging
-2022-08-26 14:13:33,962 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37699
-2022-08-26 14:13:33,963 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37699
-2022-08-26 14:13:33,963 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:33,963 - distributed.worker - INFO -          dashboard at:            127.0.0.1:37519
-2022-08-26 14:13:33,963 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43451
-2022-08-26 14:13:33,963 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:33,963 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:33,963 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:33,963 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-4dgh3c8h
-2022-08-26 14:13:33,963 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:33,963 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44061
-2022-08-26 14:13:33,963 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44061
-2022-08-26 14:13:33,964 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:33,964 - distributed.worker - INFO -          dashboard at:            127.0.0.1:38717
-2022-08-26 14:13:33,964 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43451
-2022-08-26 14:13:33,964 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:33,964 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:33,964 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:33,964 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-cpqnxmbd
-2022-08-26 14:13:33,964 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:33,967 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37699', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:33,967 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37699
-2022-08-26 14:13:33,967 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:33,967 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44061', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:33,968 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44061
-2022-08-26 14:13:33,968 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:33,968 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43451
-2022-08-26 14:13:33,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:33,968 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43451
-2022-08-26 14:13:33,968 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:33,969 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:33,969 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:33,982 - distributed.scheduler - INFO - Receive client connection: Client-f16dc3d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:33,983 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:34,329 - distributed.scheduler - INFO - Remove client Client-f16dc3d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:34,329 - distributed.scheduler - INFO - Remove client Client-f16dc3d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:34,330 - distributed.scheduler - INFO - Close client connection: Client-f16dc3d9-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:34,330 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37699
-2022-08-26 14:13:34,331 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44061
-2022-08-26 14:13:34,331 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37699', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:34,332 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37699
-2022-08-26 14:13:34,332 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44061', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:34,332 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:44061
-2022-08-26 14:13:34,332 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:34,332 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c92804f8-a7bd-4b50-b809-b7b496c9d995 Address tcp://127.0.0.1:37699 Status: Status.closing
-2022-08-26 14:13:34,332 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2eadf7d9-212f-4bdf-a878-126003fcfb08 Address tcp://127.0.0.1:44061 Status: Status.closing
-2022-08-26 14:13:34,333 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:34,334 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker_client.py::test_separate_thread_false 2022-08-26 14:13:34,571 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:34,573 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:34,573 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33721
-2022-08-26 14:13:34,573 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37319
-2022-08-26 14:13:34,576 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34743
-2022-08-26 14:13:34,576 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34743
-2022-08-26 14:13:34,576 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:34,576 - distributed.worker - INFO -          dashboard at:            127.0.0.1:46601
-2022-08-26 14:13:34,576 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33721
-2022-08-26 14:13:34,576 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:34,576 - distributed.worker - INFO -               Threads:                          3
-2022-08-26 14:13:34,576 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:34,576 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ym4ks_v3
-2022-08-26 14:13:34,576 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:34,578 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34743', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:34,578 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34743
-2022-08-26 14:13:34,578 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:34,579 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33721
-2022-08-26 14:13:34,579 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:34,579 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:34,592 - distributed.scheduler - INFO - Receive client connection: Client-f1cad6e7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:34,592 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:34,730 - distributed.scheduler - INFO - Remove client Client-f1cad6e7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:34,730 - distributed.scheduler - INFO - Remove client Client-f1cad6e7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:34,731 - distributed.scheduler - INFO - Close client connection: Client-f1cad6e7-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:34,731 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34743
-2022-08-26 14:13:34,732 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34743', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:34,732 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:34743
-2022-08-26 14:13:34,732 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:34,732 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-7168d7f6-de71-4ace-bc9a-bc4322a06148 Address tcp://127.0.0.1:34743 Status: Status.closing
-2022-08-26 14:13:34,733 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:34,733 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker_client.py::test_client_executor 2022-08-26 14:13:34,970 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:34,971 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:34,972 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43519
-2022-08-26 14:13:34,972 - distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38729
-2022-08-26 14:13:34,976 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46613
-2022-08-26 14:13:34,976 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46613
-2022-08-26 14:13:34,976 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:34,976 - distributed.worker - INFO -          dashboard at:            127.0.0.1:43357
-2022-08-26 14:13:34,976 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43519
-2022-08-26 14:13:34,977 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:34,977 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:34,977 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:34,977 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a94lqh93
-2022-08-26 14:13:34,977 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:34,977 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37351
-2022-08-26 14:13:34,977 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37351
-2022-08-26 14:13:34,977 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:13:34,977 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44383
-2022-08-26 14:13:34,977 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:43519
-2022-08-26 14:13:34,977 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:34,978 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:13:34,978 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:34,978 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-0z_yupxf
-2022-08-26 14:13:34,978 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:34,981 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46613', name: 0, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:34,981 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46613
-2022-08-26 14:13:34,981 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:34,981 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37351', name: 1, status: init, memory: 0, processing: 0>
-2022-08-26 14:13:34,982 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37351
-2022-08-26 14:13:34,982 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:34,982 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43519
-2022-08-26 14:13:34,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:34,982 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:43519
-2022-08-26 14:13:34,982 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:34,982 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:34,983 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:34,996 - distributed.scheduler - INFO - Receive client connection: Client-f2087c56-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:34,996 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:35,083 - distributed.scheduler - INFO - Client Client-f2087c56-2583-11ed-a99d-00d861bc4509 requests to cancel 0 keys
-2022-08-26 14:13:35,094 - distributed.scheduler - INFO - Remove client Client-f2087c56-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:35,094 - distributed.scheduler - INFO - Remove client Client-f2087c56-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:35,095 - distributed.scheduler - INFO - Close client connection: Client-f2087c56-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:35,096 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46613
-2022-08-26 14:13:35,096 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37351
-2022-08-26 14:13:35,097 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-ff7e6638-cfba-4f8a-81bf-2427d2d13b80 Address tcp://127.0.0.1:46613 Status: Status.closing
-2022-08-26 14:13:35,097 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9cf7905e-dba1-439b-ae6d-f658e3fa5337 Address tcp://127.0.0.1:37351 Status: Status.closing
-2022-08-26 14:13:35,098 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46613', name: 0, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:35,098 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:46613
-2022-08-26 14:13:35,098 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37351', name: 1, status: closing, memory: 0, processing: 0>
-2022-08-26 14:13:35,098 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37351
-2022-08-26 14:13:35,098 - distributed.scheduler - INFO - Lost all workers
-2022-08-26 14:13:35,099 - distributed.scheduler - INFO - Scheduler closing...
-2022-08-26 14:13:35,100 - distributed.scheduler - INFO - Scheduler closing all comms
-PASSED
-distributed/tests/test_worker_client.py::test_dont_override_default_get PASSED
-distributed/tests/test_worker_client.py::test_local_client_warning PASSED
-distributed/tests/test_worker_client.py::test_closing_worker_doesnt_close_client PASSED
-distributed/tests/test_worker_client.py::test_timeout 2022-08-26 14:13:36,983 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
-2022-08-26 14:13:36,985 - distributed.scheduler - INFO - State start
-2022-08-26 14:13:36,989 - distributed.scheduler - INFO - Clear task state
-2022-08-26 14:13:36,989 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46869
-2022-08-26 14:13:36,989 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
-2022-08-26 14:13:37,030 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33761
-2022-08-26 14:13:37,030 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33761
-2022-08-26 14:13:37,030 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39887
-2022-08-26 14:13:37,030 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46869
-2022-08-26 14:13:37,030 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:37,030 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:37,030 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:37,030 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ce53vbw5
-2022-08-26 14:13:37,030 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:37,066 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:40697
-2022-08-26 14:13:37,066 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:40697
-2022-08-26 14:13:37,066 - distributed.worker - INFO -          dashboard at:            127.0.0.1:44535
-2022-08-26 14:13:37,066 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:46869
-2022-08-26 14:13:37,066 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:37,066 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:37,066 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:37,066 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-k8imipyo
-2022-08-26 14:13:37,067 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:37,347 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33761', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:37,641 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33761
-2022-08-26 14:13:37,641 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:37,641 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46869
-2022-08-26 14:13:37,641 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:37,642 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40697', status: init, memory: 0, processing: 0>
-2022-08-26 14:13:37,642 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40697
-2022-08-26 14:13:37,642 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:37,642 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:37,642 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:46869
-2022-08-26 14:13:37,643 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:37,643 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:37,648 - distributed.scheduler - INFO - Receive client connection: Client-f39d2044-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:37,648 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:37,750 - distributed.worker - WARNING - Compute Failed
-Key:       func-281e50ba1b70d963093b6792ef64deb9
-Function:  func
-args:      ()
-kwargs:    {}
-Exception: "OSError('Timed out trying to connect to tcp://127.0.0.1:46869 after 0 s')"
-
-PASSED2022-08-26 14:13:37,753 - distributed.scheduler - INFO - Remove client Client-f39d2044-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:37,753 - distributed.scheduler - INFO - Remove client Client-f39d2044-2583-11ed-a99d-00d861bc4509
-2022-08-26 14:13:37,754 - distributed.scheduler - INFO - Close client connection: Client-f39d2044-2583-11ed-a99d-00d861bc4509
-
-distributed/tests/test_worker_client.py::test_secede_without_stealing_issue_1262 PASSED
-distributed/tests/test_worker_client.py::test_compute_within_worker_client PASSED
-distributed/tests/test_worker_client.py::test_worker_client_rejoins PASSED
-distributed/tests/test_worker_client.py::test_submit_different_names XPASS
-distributed/tests/test_worker_client.py::test_secede_does_not_claim_worker PASSED
-distributed/tests/test_worker_memory.py::test_parse_memory_limit_zero PASSED
-distributed/tests/test_worker_memory.py::test_resource_limit PASSED
-distributed/tests/test_worker_memory.py::test_parse_memory_limit_worker PASSED
-distributed/tests/test_worker_memory.py::test_parse_memory_limit_nanny 2022-08-26 14:13:41,080 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41311
-2022-08-26 14:13:41,080 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41311
-2022-08-26 14:13:41,080 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:41,080 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36709
-2022-08-26 14:13:41,080 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:33433
-2022-08-26 14:13:41,080 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:41,080 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:41,080 - distributed.worker - INFO -                Memory:                   1.86 GiB
-2022-08-26 14:13:41,080 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-f0trp8kc
-2022-08-26 14:13:41,080 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:41,395 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:33433
-2022-08-26 14:13:41,396 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:41,396 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:41,442 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:13:41,452 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41311
-2022-08-26 14:13:41,453 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-9f5cfeab-6047-4fdc-a587-11bb60494bb0 Address tcp://127.0.0.1:41311 Status: Status.closing
-PASSED
-distributed/tests/test_worker_memory.py::test_dict_data_if_no_spill_to_disk PASSED
-distributed/tests/test_worker_memory.py::test_fail_to_pickle_execute_1 PASSED
-distributed/tests/test_worker_memory.py::test_workerstate_fail_to_pickle_execute_1[executing] PASSED
-distributed/tests/test_worker_memory.py::test_workerstate_fail_to_pickle_execute_1[long-running] PASSED
-distributed/tests/test_worker_memory.py::test_workerstate_fail_to_pickle_flight XPASS
-distributed/tests/test_worker_memory.py::test_workerstate_fail_to_pickle_flight XFAIL
-distributed/tests/test_worker_memory.py::test_fail_to_pickle_execute_2 2022-08-26 14:13:42,509 - distributed.spill - ERROR - Failed to pickle 'x'
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 283, in __setitem__
-    pickled = self.dump(value)  # type: ignore
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 643, in serialize_bytelist
-    header, frames = serialize_and_split(x, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type FailToPickle', '<test_worker_memory.FailToPickle object at 0x7f16f0356d60>')
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 114, in handle_errors
-    yield
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 188, in __setitem__
-    super().__setitem__(key, value)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 116, in __setitem__
-    self.fast[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 99, in __setitem__
-    set_()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 96, in set_
-    self.evict()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 125, in evict
-    cb(k, v)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 81, in fast_to_slow
-    self.slow[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/cache.py", line 65, in __setitem__
-    self.data[key] = value
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 288, in __setitem__
-    raise PickleError(key, e)
-distributed.spill.PickleError: ('x', TypeError('Could not serialize object of type FailToPickle', '<test_worker_memory.FailToPickle object at 0x7f16f0356d60>'))
-PASSED
-distributed/tests/test_worker_memory.py::test_fail_to_pickle_spill 2022-08-26 14:13:42,891 - distributed.spill - ERROR - Failed to pickle 'bad'
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 283, in __setitem__
-    pickled = self.dump(value)  # type: ignore
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 643, in serialize_bytelist
-    header, frames = serialize_and_split(x, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 444, in serialize_and_split
-    header, frames = serialize(x, serializers, on_error, context)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/protocol/serialize.py", line 366, in serialize
-    raise TypeError(msg, str(x)[:10000])
-TypeError: ('Could not serialize object of type FailToPickle', '<test_worker_memory.FailToPickle object at 0x7f16f0356d60>')
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 114, in handle_errors
-    yield
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 210, in evict
-    _, _, weight = self.fast.evict()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 125, in evict
-    cb(k, v)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 81, in fast_to_slow
-    self.slow[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/cache.py", line 65, in __setitem__
-    self.data[key] = value
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 288, in __setitem__
-    raise PickleError(key, e)
-distributed.spill.PickleError: ('bad', TypeError('Could not serialize object of type FailToPickle', '<test_worker_memory.FailToPickle object at 0x7f16f0356d60>'))
-PASSED
-distributed/tests/test_worker_memory.py::test_spill_target_threshold PASSED
-distributed/tests/test_worker_memory.py::test_spill_constrained 2022-08-26 14:13:43,518 - distributed.spill - WARNING - Spill file on disk reached capacity; keeping data in memory
-2022-08-26 14:13:43,562 - distributed.worker - ERROR - y
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2512, in _transitions
-    process_recs(recommendations.copy())
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2506, in process_recs
-    a_recs, a_instructions = self._transition(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2424, in _transition
-    recs, instructions = func(self, ts, *args, stimulus_id=stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1862, in _transition_memory_released
-    recs, instructions = self._transition_generic_released(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1806, in _transition_generic_released
-    self._purge_state(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1420, in _purge_state
-    self.data.pop(key, None)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/_collections_abc.py", line 957, in pop
-    value = self[key]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 106, in __getitem__
-    return self.slow_to_fast(key)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 97, in slow_to_fast
-    self.fast[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 99, in __setitem__
-    set_()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 96, in set_
-    self.evict()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 125, in evict
-    cb(k, v)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 81, in fast_to_slow
-    self.slow[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/cache.py", line 65, in __setitem__
-    self.data[key] = value
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 310, in __setitem__
-    raise MaxSpillExceeded(key)
-distributed.spill.MaxSpillExceeded: y
-2022-08-26 14:13:43,562 - distributed.core - ERROR - y
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 842, in handle_stream
-    handler(**merge(extra, msg))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1843, in _
-    self.handle_stimulus(event)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2512, in _transitions
-    process_recs(recommendations.copy())
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2506, in process_recs
-    a_recs, a_instructions = self._transition(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2424, in _transition
-    recs, instructions = func(self, ts, *args, stimulus_id=stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1862, in _transition_memory_released
-    recs, instructions = self._transition_generic_released(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1806, in _transition_generic_released
-    self._purge_state(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1420, in _purge_state
-    self.data.pop(key, None)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/_collections_abc.py", line 957, in pop
-    value = self[key]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 106, in __getitem__
-    return self.slow_to_fast(key)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 97, in slow_to_fast
-    self.fast[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 99, in __setitem__
-    set_()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 96, in set_
-    self.evict()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 125, in evict
-    cb(k, v)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 81, in fast_to_slow
-    self.slow[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/cache.py", line 65, in __setitem__
-    self.data[key] = value
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 310, in __setitem__
-    raise MaxSpillExceeded(key)
-distributed.spill.MaxSpillExceeded: y
-2022-08-26 14:13:43,565 - distributed.worker - ERROR - y
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 179, in wrapper
-    return await method(self, *args, **kwargs)  # type: ignore
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1210, in handle_scheduler
-    await self.handle_stream(comm)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 842, in handle_stream
-    handler(**merge(extra, msg))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1843, in _
-    self.handle_stimulus(event)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2512, in _transitions
-    process_recs(recommendations.copy())
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2506, in process_recs
-    a_recs, a_instructions = self._transition(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2424, in _transition
-    recs, instructions = func(self, ts, *args, stimulus_id=stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1862, in _transition_memory_released
-    recs, instructions = self._transition_generic_released(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1806, in _transition_generic_released
-    self._purge_state(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1420, in _purge_state
-    self.data.pop(key, None)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/_collections_abc.py", line 957, in pop
-    value = self[key]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 106, in __getitem__
-    return self.slow_to_fast(key)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 97, in slow_to_fast
-    self.fast[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 99, in __setitem__
-    set_()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 96, in set_
-    self.evict()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 125, in evict
-    cb(k, v)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 81, in fast_to_slow
-    self.slow[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/cache.py", line 65, in __setitem__
-    self.data[key] = value
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 310, in __setitem__
-    raise MaxSpillExceeded(key)
-distributed.spill.MaxSpillExceeded: y
-2022-08-26 14:13:43,567 - tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOMainLoop object at 0x564042117a40>>, <Task finished name='Task-221045' coro=<Worker.handle_scheduler() done, defined at /home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py:176> exception=MaxSpillExceeded('y')>)
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 740, in _run_callback
-    ret = callback()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 764, in _discard_future_result
-    future.result()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 179, in wrapper
-    return await method(self, *args, **kwargs)  # type: ignore
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1210, in handle_scheduler
-    await self.handle_stream(comm)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 842, in handle_stream
-    handler(**merge(extra, msg))
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1843, in _
-    self.handle_stimulus(event)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 192, in wrapper
-    return method(self, *args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1868, in handle_stimulus
-    super().handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 3384, in handle_stimulus
-    instructions = self.state.handle_stimulus(*stims)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1301, in handle_stimulus
-    instructions += self._transitions(recs, stimulus_id=stim.stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2512, in _transitions
-    process_recs(recommendations.copy())
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2506, in process_recs
-    a_recs, a_instructions = self._transition(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 2424, in _transition
-    recs, instructions = func(self, ts, *args, stimulus_id=stimulus_id)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1862, in _transition_memory_released
-    recs, instructions = self._transition_generic_released(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1806, in _transition_generic_released
-    self._purge_state(ts)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py", line 1420, in _purge_state
-    self.data.pop(key, None)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/_collections_abc.py", line 957, in pop
-    value = self[key]
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 106, in __getitem__
-    return self.slow_to_fast(key)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 97, in slow_to_fast
-    self.fast[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 99, in __setitem__
-    set_()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 96, in set_
-    self.evict()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/lru.py", line 125, in evict
-    cb(k, v)
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/buffer.py", line 81, in fast_to_slow
-    self.slow[key] = value
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/zict/cache.py", line 65, in __setitem__
-    self.data[key] = value
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/spill.py", line 310, in __setitem__
-    raise MaxSpillExceeded(key)
-distributed.spill.MaxSpillExceeded: y
-PASSED
-distributed/tests/test_worker_memory.py::test_spill_spill_threshold 2022-08-26 14:13:43,820 - distributed.worker_memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 1.04 GiB -- Worker memory limit: 0.93 GiB
-PASSED
-distributed/tests/test_worker_memory.py::test_spill_hysteresis[False-10000000000-1] PASSED
-distributed/tests/test_worker_memory.py::test_spill_hysteresis[0.7-0-1] PASSED
-distributed/tests/test_worker_memory.py::test_spill_hysteresis[0.4-0-7] PASSED
-distributed/tests/test_worker_memory.py::test_pause_executor_manual PASSED
-distributed/tests/test_worker_memory.py::test_pause_executor_with_memory_monitor 2022-08-26 14:13:45,954 - distributed.worker_memory - WARNING - Worker is at 9000% memory usage. Pausing worker.  Process memory: 838.19 GiB -- Worker memory limit: 9.31 GiB
-2022-08-26 14:13:46,037 - distributed.worker_memory - WARNING - Worker is at 0% memory usage. Resuming worker. Process memory: 0 B -- Worker memory limit: 9.31 GiB
-PASSED
-distributed/tests/test_worker_memory.py::test_pause_prevents_deps_fetch PASSED
-distributed/tests/test_worker_memory.py::test_avoid_memory_monitor_if_zero_limit_worker PASSED
-distributed/tests/test_worker_memory.py::test_avoid_memory_monitor_if_zero_limit_nanny 2022-08-26 14:13:47,825 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41539
-2022-08-26 14:13:47,825 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41539
-2022-08-26 14:13:47,825 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:47,825 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36141
-2022-08-26 14:13:47,825 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34621
-2022-08-26 14:13:47,825 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:47,825 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:47,826 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-rpju5lqi
-2022-08-26 14:13:47,826 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:48,134 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34621
-2022-08-26 14:13:48,134 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:48,135 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:48,183 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:13:48,381 - distributed.worker - INFO - Run out-of-band function 'memory_monitor_running'
-2022-08-26 14:13:48,421 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41539
-2022-08-26 14:13:48,422 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-a4e88e4c-a781-4a73-9102-988cc57e78c5 Address tcp://127.0.0.1:41539 Status: Status.closing
-PASSED
-distributed/tests/test_worker_memory.py::test_override_data_worker PASSED
-distributed/tests/test_worker_memory.py::test_override_data_nanny 2022-08-26 14:13:49,854 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36543
-2022-08-26 14:13:49,854 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36543
-2022-08-26 14:13:49,854 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:49,854 - distributed.worker - INFO -          dashboard at:            127.0.0.1:41959
-2022-08-26 14:13:49,854 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35571
-2022-08-26 14:13:49,854 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:49,854 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:49,854 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:49,854 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-a3rk0gh5
-2022-08-26 14:13:49,854 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:50,167 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35571
-2022-08-26 14:13:50,167 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:50,167 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:50,214 - distributed.worker - INFO - Run out-of-band function 'lambda'
-2022-08-26 14:13:50,224 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36543
-2022-08-26 14:13:50,225 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-50a65683-c610-4179-96c7-e888162bfa1b Address tcp://127.0.0.1:36543 Status: Status.closing
-PASSED
-distributed/tests/test_worker_memory.py::test_override_data_vs_memory_monitor 2022-08-26 14:13:50,849 - distributed.worker_memory - WARNING - Worker is at 81% memory usage. Pausing worker.  Process memory: 7.54 GiB -- Worker memory limit: 9.31 GiB
-2022-08-26 14:13:50,909 - distributed.worker_memory - WARNING - Worker is at 0% memory usage. Resuming worker. Process memory: 0 B -- Worker memory limit: 9.31 GiB
-PASSED
-distributed/tests/test_worker_memory.py::test_manual_evict_proto 2022-08-26 14:13:51,164 - distributed.worker_memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 1.04 GiB -- Worker memory limit: 0.93 GiB
-2022-08-26 14:13:51,174 - distributed.worker_memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 668.53 MiB -- Worker memory limit: 0.93 GiB
-PASSED
-distributed/tests/test_worker_memory.py::test_nanny_terminate SKIPPED
-distributed/tests/test_worker_memory.py::test_disk_cleanup_on_terminate[False] SKIPPED
-distributed/tests/test_worker_memory.py::test_disk_cleanup_on_terminate[True] SKIPPED
-distributed/tests/test_worker_memory.py::test_pause_while_spilling 2022-08-26 14:13:51,797 - distributed.worker_memory - WARNING - Worker is at 100% memory usage. Pausing worker.  Process memory: 10.00 GiB -- Worker memory limit: 10.00 GiB
-2022-08-26 14:13:51,798 - distributed.worker_memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 10.00 GiB -- Worker memory limit: 10.00 GiB
-2022-08-26 14:13:51,800 - distributed.worker_memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 10.00 GiB -- Worker memory limit: 10.00 GiB
-2022-08-26 14:13:51,809 - distributed.worker_memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 10.00 GiB -- Worker memory limit: 10.00 GiB
-2022-08-26 14:13:51,819 - distributed.worker_memory - WARNING - Worker is at 0% memory usage. Resuming worker. Process memory: 0 B -- Worker memory limit: 10.00 GiB
-PASSED
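
The pause/resume warnings in the tests above come from the worker memory monitor crossing its configured fractions of the memory limit. For reference, a sketch of how those thresholds are commonly set through dask's configuration (key names as used by the distributed 2022.x series; the values here are illustrative, not the ones the tests use):

    import dask
    from dask.distributed import Client, LocalCluster

    # Fractions of the per-worker memory limit (illustrative values):
    #   target    - start spilling managed data to disk
    #   spill     - spill based on overall process memory as well
    #   pause     - stop starting new tasks until memory drops
    #   terminate - the nanny restarts the worker
    dask.config.set({
        "distributed.worker.memory.target": 0.60,
        "distributed.worker.memory.spill": 0.70,
        "distributed.worker.memory.pause": 0.80,
        "distributed.worker.memory.terminate": 0.95,
    })

    # processes=False keeps everything in one process; the terminate threshold
    # only takes effect when workers run under a Nanny (processes=True).
    with LocalCluster(n_workers=1, processes=False, memory_limit="1GiB") as cluster:
        with Client(cluster) as client:
            print(client.run(lambda: "worker is up"))
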
-distributed/tests/test_worker_memory.py::test_release_evloop_while_spilling SKIPPED
-distributed/tests/test_worker_memory.py::test_deprecated_attributes[Worker-memory_limit-123000000000.0] PASSED
-distributed/tests/test_worker_memory.py::test_deprecated_attributes[Worker-memory_target_fraction-0.789] PASSED
-distributed/tests/test_worker_memory.py::test_deprecated_attributes[Worker-memory_spill_fraction-0.789] PASSED
-distributed/tests/test_worker_memory.py::test_deprecated_attributes[Worker-memory_pause_fraction-0.789] PASSED
-distributed/tests/test_worker_memory.py::test_deprecated_attributes[Nanny-memory_limit-123000000000.0] 2022-08-26 14:13:53,813 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44959
-2022-08-26 14:13:53,813 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44959
-2022-08-26 14:13:53,813 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39419
-2022-08-26 14:13:53,813 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42247
-2022-08-26 14:13:53,813 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:53,813 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:53,813 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:53,813 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lee4wn9h
-2022-08-26 14:13:53,813 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:54,128 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42247
-2022-08-26 14:13:54,128 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:54,129 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:54,170 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44959
-2022-08-26 14:13:54,171 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-8e5fb516-5b62-44ba-95dd-d8bc63203ed5 Address tcp://127.0.0.1:44959 Status: Status.closing
-PASSED
-distributed/tests/test_worker_memory.py::test_deprecated_attributes[Nanny-memory_terminate_fraction-0.789] 2022-08-26 14:13:55,329 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42477
-2022-08-26 14:13:55,329 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42477
-2022-08-26 14:13:55,329 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39179
-2022-08-26 14:13:55,329 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:36505
-2022-08-26 14:13:55,329 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:55,329 - distributed.worker - INFO -               Threads:                         12
-2022-08-26 14:13:55,329 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:55,329 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-t3f80zor
-2022-08-26 14:13:55,329 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:55,643 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:36505
-2022-08-26 14:13:55,643 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:55,644 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:55,672 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42477
-2022-08-26 14:13:55,672 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-39cad684-8942-4fbd-a2e1-9e51e3802427 Address tcp://127.0.0.1:42477 Status: Status.closing
-PASSED
-distributed/tests/test_worker_memory.py::test_deprecated_memory_monitor_method_worker PASSED
-distributed/tests/test_worker_memory.py::test_deprecated_memory_monitor_method_nanny 2022-08-26 14:13:57,065 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46025
-2022-08-26 14:13:57,065 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46025
-2022-08-26 14:13:57,065 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:13:57,065 - distributed.worker - INFO -          dashboard at:            127.0.0.1:34715
-2022-08-26 14:13:57,065 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34147
-2022-08-26 14:13:57,065 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:57,065 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:13:57,065 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:13:57,065 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-scv0h6l_
-2022-08-26 14:13:57,065 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:57,375 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34147
-2022-08-26 14:13:57,375 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:13:57,376 - distributed.core - INFO - Starting established connection
-2022-08-26 14:13:57,377 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46025
-2022-08-26 14:13:57,378 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-f5751680-9c96-477c-91d7-ec2618b2d36e Address tcp://127.0.0.1:46025 Status: Status.closing
-PASSED
-distributed/tests/test_worker_memory.py::test_deprecated_params[memory_target_fraction] PASSED
-distributed/tests/test_worker_memory.py::test_deprecated_params[memory_spill_fraction] PASSED
-distributed/tests/test_worker_memory.py::test_deprecated_params[memory_pause_fraction] PASSED
-distributed/tests/test_worker_state_machine.py::test_instruction_match PASSED
-distributed/tests/test_worker_state_machine.py::test_TaskState_tracking PASSED
-distributed/tests/test_worker_state_machine.py::test_TaskState_get_nbytes PASSED
-distributed/tests/test_worker_state_machine.py::test_TaskState_eq PASSED
-distributed/tests/test_worker_state_machine.py::test_TaskState__to_dict PASSED
-distributed/tests/test_worker_state_machine.py::test_TaskState_repr PASSED
-distributed/tests/test_worker_state_machine.py::test_WorkerState__to_dict PASSED
-distributed/tests/test_worker_state_machine.py::test_WorkerState_pickle PASSED
-distributed/tests/test_worker_state_machine.py::test_pickle_exceptions[False-InvalidTransition-kwargs0] PASSED
-distributed/tests/test_worker_state_machine.py::test_pickle_exceptions[False-TransitionCounterMaxExceeded-kwargs1] PASSED
-distributed/tests/test_worker_state_machine.py::test_pickle_exceptions[False-InvalidTaskState-kwargs2] PASSED
-distributed/tests/test_worker_state_machine.py::test_pickle_exceptions[True-InvalidTransition-kwargs0] PASSED
-distributed/tests/test_worker_state_machine.py::test_pickle_exceptions[True-TransitionCounterMaxExceeded-kwargs1] PASSED
-distributed/tests/test_worker_state_machine.py::test_pickle_exceptions[True-InvalidTaskState-kwargs2] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[Instruction] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[GatherDep] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[Execute] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[RetryBusyWorkerLater] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[SendMessageToScheduler] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[TaskFinishedMsg] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[TaskErredMsg] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[ReleaseWorkerDataMsg] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[RescheduleMsg] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[LongRunningMsg] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[AddKeysMsg] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[RequestRefreshWhoHasMsg] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[StealResponseMsg] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[StateMachineEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[PauseEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[UnpauseEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[RetryBusyWorkerEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[GatherDepDoneEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[GatherDepSuccessEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[GatherDepBusyEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[GatherDepNetworkFailureEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[GatherDepFailureEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[ComputeTaskEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[ExecuteDoneEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[ExecuteSuccessEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[ExecuteFailureEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[RescheduleEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[CancelComputeEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[FindMissingEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[RefreshWhoHasEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[AcquireReplicasEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[RemoveReplicasEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[FreeKeysEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[StealRequestEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[UpdateDataEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_slots[SecedeEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_sendmsg_to_dict PASSED
-distributed/tests/test_worker_state_machine.py::test_merge_recs_instructions PASSED
-distributed/tests/test_worker_state_machine.py::test_event_to_dict_with_annotations PASSED
-distributed/tests/test_worker_state_machine.py::test_event_to_dict_without_annotations PASSED
-distributed/tests/test_worker_state_machine.py::test_computetask_to_dict PASSED
-distributed/tests/test_worker_state_machine.py::test_computetask_dummy PASSED
-distributed/tests/test_worker_state_machine.py::test_updatedata_to_dict PASSED
-distributed/tests/test_worker_state_machine.py::test_executesuccess_to_dict PASSED
-distributed/tests/test_worker_state_machine.py::test_executesuccess_dummy PASSED
-distributed/tests/test_worker_state_machine.py::test_executefailure_to_dict PASSED
-distributed/tests/test_worker_state_machine.py::test_executefailure_dummy PASSED
-distributed/tests/test_worker_state_machine.py::test_fetch_to_compute PASSED
-distributed/tests/test_worker_state_machine.py::test_fetch_via_amm_to_compute PASSED
-distributed/tests/test_worker_state_machine.py::test_lose_replica_during_fetch[False] PASSED
-distributed/tests/test_worker_state_machine.py::test_lose_replica_during_fetch[True] PASSED
-distributed/tests/test_worker_state_machine.py::test_fetch_to_missing_on_busy PASSED
-distributed/tests/test_worker_state_machine.py::test_new_replica_while_all_workers_in_flight PASSED
-distributed/tests/test_worker_state_machine.py::test_cancelled_while_in_flight PASSED
-distributed/tests/test_worker_state_machine.py::test_in_memory_while_in_flight PASSED
-distributed/tests/test_worker_state_machine.py::test_forget_data_needed PASSED
-distributed/tests/test_worker_state_machine.py::test_missing_handle_compute_dependency PASSED
-distributed/tests/test_worker_state_machine.py::test_missing_to_waiting PASSED
-distributed/tests/test_worker_state_machine.py::test_task_state_instance_are_garbage_collected 2022-08-26 14:14:02,867 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34073
-2022-08-26 14:14:02,867 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34073
-2022-08-26 14:14:02,867 - distributed.worker - INFO -           Worker name:                          1
-2022-08-26 14:14:02,867 - distributed.worker - INFO -          dashboard at:            127.0.0.1:40235
-2022-08-26 14:14:02,867 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42811
-2022-08-26 14:14:02,867 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:14:02,867 - distributed.worker - INFO -               Threads:                          2
-2022-08-26 14:14:02,867 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:14:02,867 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-tsr5735t
-2022-08-26 14:14:02,867 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:14:02,874 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46377
-2022-08-26 14:14:02,874 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46377
-2022-08-26 14:14:02,874 - distributed.worker - INFO -           Worker name:                          0
-2022-08-26 14:14:02,874 - distributed.worker - INFO -          dashboard at:            127.0.0.1:39441
-2022-08-26 14:14:02,874 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42811
-2022-08-26 14:14:02,874 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:14:02,874 - distributed.worker - INFO -               Threads:                          1
-2022-08-26 14:14:02,874 - distributed.worker - INFO -                Memory:                  62.82 GiB
-2022-08-26 14:14:02,874 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-36jo84td
-2022-08-26 14:14:02,874 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:14:03,175 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42811
-2022-08-26 14:14:03,175 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:14:03,175 - distributed.core - INFO - Starting established connection
-2022-08-26 14:14:03,189 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42811
-2022-08-26 14:14:03,189 - distributed.worker - INFO - -------------------------------------------------
-2022-08-26 14:14:03,189 - distributed.core - INFO - Starting established connection
-2022-08-26 14:14:03,441 - distributed.worker - INFO - Run out-of-band function 'check'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.tasks` attribute has been moved to `Worker.state.tasks`
-  warnings.warn(
-2022-08-26 14:14:03,443 - distributed.worker - INFO - Run out-of-band function 'check'
-/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker_state_machine.py:3498: FutureWarning: The `Worker.tasks` attribute has been moved to `Worker.state.tasks`
-  warnings.warn(
-2022-08-26 14:14:03,731 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46377
-2022-08-26 14:14:03,732 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34073
-2022-08-26 14:14:03,732 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-05e16d53-1e34-46d4-a84f-ebc35df2f490 Address tcp://127.0.0.1:46377 Status: Status.closing
-2022-08-26 14:14:03,732 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-2cc53ebf-b5ac-4d90-bfcc-2db84ab79de0 Address tcp://127.0.0.1:34073 Status: Status.closing
-PASSED
-distributed/tests/test_worker_state_machine.py::test_fetch_to_missing_on_refresh_who_has PASSED
-distributed/tests/test_worker_state_machine.py::test_fetch_to_missing_on_network_failure 2022-08-26 14:14:04,504 - distributed.core - ERROR - Exception while handling op get_data
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker_state_machine.py", line 969, in get_data
-    raise OSError("fake error")
-OSError: fake error
-2022-08-26 14:14:04,506 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:35331
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1992, in gather_dep
-    response = await get_data_from_worker(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2731, in get_data_from_worker
-    return await retry_operation(_get_data, operation="get_data_from_worker")
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 383, in retry_operation
-    return await retry(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_comm.py", line 368, in retry
-    return await coro()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 2711, in _get_data
-    response = await send_recv(
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 944, in send_recv
-    raise exc.with_traceback(tb)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 770, in _handle_comm
-    result = await result
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_worker_state_machine.py", line 969, in get_data
-    raise OSError("fake error")
-OSError: fake error
-PASSED
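
The OSError above is injected by the test's stub get_data handler; on its way out it passes through distributed's retry_operation/retry helpers in utils_comm.py before the worker logs the broken stream. The general shape of that retry-an-async-call pattern, sketched independently of distributed's implementation (the attempt count, delay, and exception set below are arbitrary):

    import asyncio

    async def retry(coro_factory, attempts=3, delay=0.1, exceptions=(OSError,)):
        """Await `coro_factory()`, retrying on the given exceptions."""
        for attempt in range(1, attempts + 1):
            try:
                return await coro_factory()
            except exceptions:
                if attempt == attempts:
                    raise
                await asyncio.sleep(delay)

    def make_flaky(fail_times=2):
        calls = {"n": 0}
        async def flaky_get_data():
            # Fails a couple of times, then succeeds -- stands in for a
            # remote peer's get_data call.
            calls["n"] += 1
            if calls["n"] <= fail_times:
                raise OSError("fake error")
            return {"x": 123}
        return flaky_get_data

    print(asyncio.run(retry(make_flaky())))   # {'x': 123}
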
-distributed/tests/test_worker_state_machine.py::test_deprecated_worker_attributes PASSED
-distributed/tests/test_worker_state_machine.py::test_aggregate_gather_deps[10000000-3] PASSED
-distributed/tests/test_worker_state_machine.py::test_aggregate_gather_deps[20000000-2] PASSED
-distributed/tests/test_worker_state_machine.py::test_aggregate_gather_deps[30000000-1] PASSED
-distributed/tests/test_worker_state_machine.py::test_gather_priority PASSED
-distributed/tests/test_worker_state_machine.py::test_task_acquires_resources[executing] PASSED
-distributed/tests/test_worker_state_machine.py::test_task_acquires_resources[long-running] PASSED
-distributed/tests/test_worker_state_machine.py::test_task_releases_resources[executing-ExecuteSuccessEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_task_releases_resources[executing-ExecuteFailureEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_task_releases_resources[executing-RescheduleEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_task_releases_resources[long-running-ExecuteSuccessEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_task_releases_resources[long-running-ExecuteFailureEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_task_releases_resources[long-running-RescheduleEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_task_with_dependencies_acquires_resources PASSED
-distributed/tests/test_worker_state_machine.py::test_resumed_task_releases_resources[executing-ExecuteSuccessEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_resumed_task_releases_resources[executing-ExecuteFailureEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_resumed_task_releases_resources[executing-RescheduleEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_resumed_task_releases_resources[long-running-ExecuteSuccessEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_resumed_task_releases_resources[long-running-ExecuteFailureEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_resumed_task_releases_resources[long-running-RescheduleEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_clean_log PASSED
-distributed/tests/test_worker_state_machine.py::test_running_task_in_all_running_tasks[executing] PASSED
-distributed/tests/test_worker_state_machine.py::test_running_task_in_all_running_tasks[long-running] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_task_not_in_all_running_tasks[executing-ExecuteSuccessEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_task_not_in_all_running_tasks[executing-ExecuteFailureEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_task_not_in_all_running_tasks[executing-RescheduleEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_task_not_in_all_running_tasks[long-running-ExecuteSuccessEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_task_not_in_all_running_tasks[long-running-ExecuteFailureEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_task_not_in_all_running_tasks[long-running-RescheduleEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_resumed_task_not_in_all_running_tasks[executing-ExecuteSuccessEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_resumed_task_not_in_all_running_tasks[executing-ExecuteFailureEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_resumed_task_not_in_all_running_tasks[executing-RescheduleEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_resumed_task_not_in_all_running_tasks[long-running-ExecuteSuccessEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_resumed_task_not_in_all_running_tasks[long-running-ExecuteFailureEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_done_resumed_task_not_in_all_running_tasks[long-running-RescheduleEvent] PASSED
-distributed/tests/test_worker_state_machine.py::test_gather_dep_failure XPASS
-distributed/tests/test_worker_state_machine.py::test_gather_dep_failure XFAIL
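
test_gather_dep_failure is reported both as XPASS and XFAIL above. Whatever produced the double report in this run, the two outcomes themselves come from pytest.mark.xfail: a marked test that fails is reported XFAIL, and one that unexpectedly passes is reported XPASS (unless strict=True, which turns the unexpected pass into a failure). A minimal, hypothetical example, not the test in the log:

    import pytest

    def do_flaky_thing():
        return True          # placeholder for the behaviour under test

    @pytest.mark.xfail(reason="known race in dependency gathering", strict=False)
    def test_sometimes_broken():
        # If this assertion fails, the report is XFAIL; if it passes, XPASS.
        assert do_flaky_thing()
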
-
-==================================== ERRORS ====================================
-___________________ ERROR at teardown of test_basic_no_loop ____________________
-
-    @pytest.fixture
-    def cleanup():
->       with clean():
-
-distributed/utils_test.py:1781: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-../../../../../install.20220728/lib/python3.10/contextlib.py:142: in __exit__
-    next(self.gen)
-distributed/utils_test.py:1775: in clean
-    with check_instances() if instances else nullcontext():
-../../../../../install.20220728/lib/python3.10/contextlib.py:142: in __exit__
-    next(self.gen)
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-    @contextmanager
-    def check_instances():
-        Client._instances.clear()
-        Worker._instances.clear()
-        Scheduler._instances.clear()
-        SpecCluster._instances.clear()
-        Worker._initialized_clients.clear()
-        SchedulerTaskState._instances.clear()
-        WorkerTaskState._instances.clear()
-        Nanny._instances.clear()
-        _global_clients.clear()
-        Comm._instances.clear()
-    
-        yield
-    
-        start = time()
-        while set(_global_clients):
-            sleep(0.1)
-            assert time() < start + 10
-    
-        _global_clients.clear()
-    
-        for w in Worker._instances:
-            with suppress(RuntimeError):  # closed IOLoop
-                w.loop.add_callback(w.close, executor_wait=False)
-                if w.status in WORKER_ANY_RUNNING:
-                    w.loop.add_callback(w.close)
-        Worker._instances.clear()
-    
-        start = time()
-        while any(c.status != "closed" for c in Worker._initialized_clients):
-            sleep(0.1)
-            assert time() < start + 10
-        Worker._initialized_clients.clear()
-    
-        for _ in range(5):
-            if all(c.closed() for c in Comm._instances):
-                break
-            else:
-                sleep(0.1)
-        else:
-            L = [c for c in Comm._instances if not c.closed()]
-            Comm._instances.clear()
-            raise ValueError("Unclosed Comms", L)
-    
-        assert all(
-            n.status in {Status.closed, Status.init, Status.failed}
-            for n in Nanny._instances
-        ), {n: n.status for n in Nanny._instances}
-    
-        # assert not list(SpecCluster._instances)  # TODO
->       assert all(c.status == Status.closed for c in SpecCluster._instances), list(
-            SpecCluster._instances
-        )
-E       AssertionError: [<[AttributeError("'LocalCluster' object has no attribute '_cluster_info'") raised in repr()] LocalCluster object at 0x56403e011cd0>]
-E       assert False
-E        +  where False = all(<generator object check_instances.<locals>.<genexpr> at 0x56403f99d460>)
-
-distributed/utils_test.py:1740: AssertionError
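
The assertion that fails above lives in distributed's check_instances(), which walks class-level WeakSets of every Client/Worker/Scheduler/SpecCluster created during a test and insists they ended up closed by teardown. A stripped-down sketch of that bookkeeping pattern with weakref.WeakSet and a pytest fixture (the class and attribute names here are illustrative, not distributed's):

    import weakref
    import pytest

    class Cluster:
        _instances = weakref.WeakSet()      # live Cluster objects; GC'd ones drop out

        def __init__(self):
            self.status = "running"
            Cluster._instances.add(self)

        def close(self):
            self.status = "closed"

    @pytest.fixture
    def cleanup():
        Cluster._instances.clear()
        yield
        # Fail the test's teardown if anything created during it is still open.
        assert all(c.status == "closed" for c in Cluster._instances), list(
            Cluster._instances
        )

    def test_uses_cluster(cleanup):
        c = Cluster()
        c.close()                           # forgetting this would fail at teardown
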
-____________ ERROR at teardown of test_loop_started_in_constructor _____________
-
-    @pytest.fixture
-    def cleanup():
->       with clean():
-
-distributed/utils_test.py:1781: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-../../../../../install.20220728/lib/python3.10/contextlib.py:142: in __exit__
-    next(self.gen)
-distributed/utils_test.py:1775: in clean
-    with check_instances() if instances else nullcontext():
-../../../../../install.20220728/lib/python3.10/contextlib.py:142: in __exit__
-    next(self.gen)
-distributed/utils_test.py:1740: in check_instances
-    assert all(c.status == Status.closed for c in SpecCluster._instances), list(
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-.0 = <generator object WeakSet.__iter__ at 0x56403f7224a0>
-
-    assert all(c.status == Status.closed for c in SpecCluster._instances), list(
-        SpecCluster._instances
->   )
-E   AttributeError: 'SpecCluster' object has no attribute 'status'
-
-distributed/utils_test.py:1742: AttributeError
-___________ ERROR at teardown of test_close_loop_sync_start_new_loop ___________
-
-    @pytest.fixture
-    def cleanup():
->       with clean():
-
-distributed/utils_test.py:1781: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-../../../../../install.20220728/lib/python3.10/contextlib.py:142: in __exit__
-    next(self.gen)
-distributed/utils_test.py:1775: in clean
-    with check_instances() if instances else nullcontext():
-../../../../../install.20220728/lib/python3.10/contextlib.py:142: in __exit__
-    next(self.gen)
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-    @contextmanager
-    def check_instances():
-        Client._instances.clear()
-        Worker._instances.clear()
-        Scheduler._instances.clear()
-        SpecCluster._instances.clear()
-        Worker._initialized_clients.clear()
-        SchedulerTaskState._instances.clear()
-        WorkerTaskState._instances.clear()
-        Nanny._instances.clear()
-        _global_clients.clear()
-        Comm._instances.clear()
-    
-        yield
-    
-        start = time()
-        while set(_global_clients):
-            sleep(0.1)
-            assert time() < start + 10
-    
-        _global_clients.clear()
-    
-        for w in Worker._instances:
-            with suppress(RuntimeError):  # closed IOLoop
-                w.loop.add_callback(w.close, executor_wait=False)
-                if w.status in WORKER_ANY_RUNNING:
-                    w.loop.add_callback(w.close)
-        Worker._instances.clear()
-    
-        start = time()
-        while any(c.status != "closed" for c in Worker._initialized_clients):
-            sleep(0.1)
-            assert time() < start + 10
-        Worker._initialized_clients.clear()
-    
-        for _ in range(5):
-            if all(c.closed() for c in Comm._instances):
-                break
-            else:
-                sleep(0.1)
-        else:
-            L = [c for c in Comm._instances if not c.closed()]
-            Comm._instances.clear()
-            raise ValueError("Unclosed Comms", L)
-    
-        assert all(
-            n.status in {Status.closed, Status.init, Status.failed}
-            for n in Nanny._instances
-        ), {n: n.status for n in Nanny._instances}
-    
-        # assert not list(SpecCluster._instances)  # TODO
->       assert all(c.status == Status.closed for c in SpecCluster._instances), list(
-            SpecCluster._instances
-        )
-E       AssertionError: [<[AttributeError("'LocalCluster' object has no attribute '_cluster_info'") raised in repr()] LocalCluster object at 0x56403e67d910>]
-E       assert False
-E        +  where False = all(<generator object check_instances.<locals>.<genexpr> at 0x5640417f52e0>)
-
-distributed/utils_test.py:1740: AssertionError
-=================================== FAILURES ===================================
-______________________________ test_basic_no_loop ______________________________
-
-cleanup = None
-
-    @pytest.mark.filterwarnings("ignore:There is no current event loop:DeprecationWarning")
-    def test_basic_no_loop(cleanup):
-        loop = None
-        try:
->           with LocalCluster(
-                n_workers=0, silence_logs=False, dashboard_address=":0", loop=None
-            ) as cluster:
-
-distributed/deploy/tests/test_adaptive.py:296: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-distributed/deploy/local.py:238: in __init__
-    super().__init__(
-distributed/deploy/spec.py:259: in __init__
-    super().__init__(
-distributed/deploy/cluster.py:69: in __init__
-    self._loop_runner = LoopRunner(loop=loop, asynchronous=asynchronous)
-distributed/utils.py:448: in __init__
-    self._loop = IOLoop()
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/util.py:276: in __new__
-    instance.initialize(*args, **init_kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:339: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:140: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/ioloop.py:350: in initialize
-    self.make_current()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-self = <tornado.platform.asyncio.AsyncIOLoop object at 0x56403de5aed0>
-
-    def make_current(self) -> None:
->       warnings.warn(
-            "make_current is deprecated; start the event loop first",
-            DeprecationWarning,
-            stacklevel=2,
-        )
-E       DeprecationWarning: make_current is deprecated; start the event loop first
-
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:353: DeprecationWarning
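
The failure above is Tornado >= 6.2 deprecating the implicit make_current() that happens when LoopRunner builds a bare IOLoop() with no asyncio loop running (distributed/utils.py:448 in the traceback). A sketch of the pattern Tornado's deprecation notes point to instead, creating the IOLoop wrapper only inside an already-running asyncio loop; this only illustrates the Tornado guidance and is not distributed's own code:

    import asyncio
    from tornado.ioloop import IOLoop

    async def main():
        # Inside a running asyncio loop, IOLoop.current() wraps that loop
        # without touching the deprecated make_current()/clear_current() paths.
        loop = IOLoop.current()
        loop.add_callback(lambda: print("callback ran on", loop))
        await asyncio.sleep(0.1)            # let the scheduled callback run

    asyncio.run(main())                     # owns loop creation and shutdown
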
-____________________________ test_adapt_then_manual ____________________________
-
-cls = <class '_pytest.runner.CallInfo'>
-func = <function call_runtest_hook.<locals>.<lambda> at 0x56403eb87300>
-when = 'call'
-reraise = (<class '_pytest.outcomes.Exit'>, <class 'KeyboardInterrupt'>)
-
-    @classmethod
-    def from_call(
-        cls,
-        func: "Callable[[], TResult]",
-        when: "Literal['collect', 'setup', 'call', 'teardown']",
-        reraise: Optional[
-            Union[Type[BaseException], Tuple[Type[BaseException], ...]]
-        ] = None,
-    ) -> "CallInfo[TResult]":
-        """Call func, wrapping the result in a CallInfo.
-    
-        :param func:
-            The function to call. Called without arguments.
-        :param when:
-            The phase in which the function is called.
-        :param reraise:
-            Exception or exceptions that shall propagate if raised by the
-            function, instead of being wrapped in the CallInfo.
-        """
-        excinfo = None
-        start = timing.time()
-        precise_start = timing.perf_counter()
-        try:
->           result: Optional[TResult] = func()
-
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/runner.py:338: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/runner.py:259: in <lambda>
-    lambda: ihook(item=item, **kwds), when=when, reraise=reraise
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__
-    return self._hookexec(self, self.get_hookimpls(), kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec
-    return self._inner_hookexec(hook, methods, kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>
-    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/unraisableexception.py:88: in pytest_runtest_call
-    yield from unraisable_exception_runtest_hook()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-    def unraisable_exception_runtest_hook() -> Generator[None, None, None]:
-        with catch_unraisable_exception() as cm:
-            yield
-            if cm.unraisable:
-                if cm.unraisable.err_msg is not None:
-                    err_msg = cm.unraisable.err_msg
-                else:
-                    err_msg = "Exception ignored in"
-                msg = f"{err_msg}: {cm.unraisable.object!r}\n\n"
-                msg += "".join(
-                    traceback.format_exception(
-                        cm.unraisable.exc_type,
-                        cm.unraisable.exc_value,
-                        cm.unraisable.exc_traceback,
-                    )
-                )
->               warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
-E               pytest.PytestUnraisableExceptionWarning: Exception ignored in: <function Cluster.__del__ at 0x564038596c60>
-E               
-E               Traceback (most recent call last):
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/deploy/cluster.py", line 224, in __del__
-E                   _warn(f"unclosed cluster {self_r}", ResourceWarning, source=self)
-E               ResourceWarning: unclosed cluster with a broken __repr__ <distributed.deploy.local.LocalCluster object at 0x56403e011cd0>
-
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning
------------------------------- Captured log call -------------------------------
-ERROR    asyncio.events:utils.py:825 
-Traceback (most recent call last):
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 799, in wrapper
-    return await func(*args, **kwargs)
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/worker.py", line 1443, in close
-    await self.finished()
-  File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/core.py", line 447, in finished
-    await self._event_finished.wait()
-  File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/locks.py", line 214, in wait
-    await fut
-asyncio.exceptions.CancelledError
[The identical asyncio CancelledError traceback above is repeated seven more times in the captured log.]
-_______________________ test_loop_started_in_constructor _______________________
-
-cleanup = None
-
-    @pytest.mark.filterwarnings("ignore:There is no current event loop:DeprecationWarning")
-    def test_loop_started_in_constructor(cleanup):
-        # test that SpecCluster.__init__ starts a loop in another thread
->       cluster = SpecCluster(worker_spec, scheduler=scheduler, loop=None)
-
-distributed/deploy/tests/test_spec_cluster.py:88: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-distributed/deploy/spec.py:259: in __init__
-    super().__init__(
-distributed/deploy/cluster.py:69: in __init__
-    self._loop_runner = LoopRunner(loop=loop, asynchronous=asynchronous)
-distributed/utils.py:448: in __init__
-    self._loop = IOLoop()
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/util.py:276: in __new__
-    instance.initialize(*args, **init_kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:339: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:140: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/ioloop.py:350: in initialize
-    self.make_current()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-self = <tornado.platform.asyncio.AsyncIOLoop object at 0x56403e305600>
-
-    def make_current(self) -> None:
->       warnings.warn(
-            "make_current is deprecated; start the event loop first",
-            DeprecationWarning,
-            stacklevel=2,
-        )
-E       DeprecationWarning: make_current is deprecated; start the event loop first
-
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:353: DeprecationWarning
-_______________________ test_startup_close_startup_sync ________________________
-
-loop = <tornado.platform.asyncio.AsyncIOMainLoop object at 0x7f15842703e0>
-
-    @pytest.mark.filterwarnings("ignore:There is no current event loop:DeprecationWarning")
-    def test_startup_close_startup_sync(loop):
-        with cluster() as (s, [a, b]):
-            with Client(s["address"], loop=loop) as c:
-                sleep(0.1)
->           with Client(s["address"], loop=None) as c:
-
-distributed/tests/test_client.py:2883: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-distributed/client.py:878: in __init__
-    self._loop_runner = LoopRunner(loop=loop, asynchronous=asynchronous)
-distributed/utils.py:448: in __init__
-    self._loop = IOLoop()
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/util.py:276: in __new__
-    instance.initialize(*args, **init_kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:339: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:140: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/ioloop.py:350: in initialize
-    self.make_current()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-self = <tornado.platform.asyncio.AsyncIOLoop object at 0x56403f69e1e0>
-
-    def make_current(self) -> None:
->       warnings.warn(
-            "make_current is deprecated; start the event loop first",
-            DeprecationWarning,
-            stacklevel=2,
-        )
-E       DeprecationWarning: make_current is deprecated; start the event loop first
-
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:353: DeprecationWarning
-_____________________ test_client_async_before_loop_starts _____________________
-
-cleanup = None
-
-    @pytest.mark.filterwarnings("ignore:There is no current event loop:DeprecationWarning")
-    def test_client_async_before_loop_starts(cleanup):
-        async def close():
-            async with client:
-                pass
-    
->       with pristine_loop() as loop:
-
-distributed/tests/test_client.py:5539: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-../../../../../install.20220728/lib/python3.10/contextlib.py:135: in __enter__
-    return next(self.gen)
-distributed/utils_test.py:158: in pristine_loop
-    IOLoop.clear_instance()
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/ioloop.py:227: in clear_instance
-    IOLoop.clear_current()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-    @staticmethod
-    def clear_current() -> None:
-        """Clears the `IOLoop` for the current thread.
-    
-        Intended primarily for use by test frameworks in between tests.
-    
-        .. versionchanged:: 5.0
-           This method also clears the current `asyncio` event loop.
-        .. deprecated:: 6.2
-        """
->       warnings.warn(
-            "clear_current is deprecated",
-            DeprecationWarning,
-            stacklevel=2,
-        )
-E       DeprecationWarning: clear_current is deprecated
-
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/ioloop.py:317: DeprecationWarning
-___________________ test_get_client_functions_spawn_clusters ___________________
-
-cls = <class '_pytest.runner.CallInfo'>
-func = <function call_runtest_hook.<locals>.<lambda> at 0x5640410d8100>
-when = 'call'
-reraise = (<class '_pytest.outcomes.Exit'>, <class 'KeyboardInterrupt'>)
-
-    @classmethod
-    def from_call(
-        cls,
-        func: "Callable[[], TResult]",
-        when: "Literal['collect', 'setup', 'call', 'teardown']",
-        reraise: Optional[
-            Union[Type[BaseException], Tuple[Type[BaseException], ...]]
-        ] = None,
-    ) -> "CallInfo[TResult]":
-        """Call func, wrapping the result in a CallInfo.
-    
-        :param func:
-            The function to call. Called without arguments.
-        :param when:
-            The phase in which the function is called.
-        :param reraise:
-            Exception or exceptions that shall propagate if raised by the
-            function, instead of being wrapped in the CallInfo.
-        """
-        excinfo = None
-        start = timing.time()
-        precise_start = timing.perf_counter()
-        try:
->           result: Optional[TResult] = func()
-
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/runner.py:338: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/runner.py:259: in <lambda>
-    lambda: ihook(item=item, **kwds), when=when, reraise=reraise
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__
-    return self._hookexec(self, self.get_hookimpls(), kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec
-    return self._inner_hookexec(hook, methods, kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>
-    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/unraisableexception.py:88: in pytest_runtest_call
-    yield from unraisable_exception_runtest_hook()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-    def unraisable_exception_runtest_hook() -> Generator[None, None, None]:
-        with catch_unraisable_exception() as cm:
-            yield
-            if cm.unraisable:
-                if cm.unraisable.err_msg is not None:
-                    err_msg = cm.unraisable.err_msg
-                else:
-                    err_msg = "Exception ignored in"
-                msg = f"{err_msg}: {cm.unraisable.object!r}\n\n"
-                msg += "".join(
-                    traceback.format_exception(
-                        cm.unraisable.exc_type,
-                        cm.unraisable.exc_value,
-                        cm.unraisable.exc_traceback,
-                    )
-                )
->               warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
-E               pytest.PytestUnraisableExceptionWarning: Exception ignored in: <function Cluster.__del__ at 0x564038596c60>
-E               
-E               Traceback (most recent call last):
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/pluggy/callers.py", line 187, in _multicall
-E                   res = hook_impl.function(*args)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/_pytest/runner.py", line 174, in pytest_runtest_call
-E                   raise e
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/_pytest/runner.py", line 166, in pytest_runtest_call
-E                   item.runtest()
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/_pytest/python.py", line 1761, in runtest
-E                   self.ihook.pytest_pyfunc_call(pyfuncitem=self)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/pluggy/hooks.py", line 286, in __call__
-E                   return self._hookexec(self, self.get_hookimpls(), kwargs)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/pluggy/manager.py", line 93, in _hookexec
-E                   return self._inner_hookexec(hook, methods, kwargs)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/pluggy/manager.py", line 84, in <lambda>
-E                   self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/pluggy/callers.py", line 208, in _multicall
-E                   return outcome.get_result()
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/pluggy/callers.py", line 80, in get_result
-E                   raise ex[1].with_traceback(ex[2])
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/pluggy/callers.py", line 187, in _multicall
-E                   res = hook_impl.function(*args)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
-E                   result = testfunction(**testargs)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/contextlib.py", line 79, in inner
-E                   return func(*args, **kwds)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/contextlib.py", line 79, in inner
-E                   return func(*args, **kwds)
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 1072, in test_func
-E                   return _run_and_close_tornado(async_fn_outer)
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 376, in _run_and_close_tornado
-E                   return asyncio.run(inner_fn())
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/runners.py", line 44, in run
-E                   return loop.run_until_complete(main)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
-E                   return future.result()
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 373, in inner_fn
-E                   return await async_fn(*args, **kwargs)
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 1069, in async_fn_outer
-E                   return await asyncio.wait_for(async_fn(), timeout=timeout * 2)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-E                   return fut.result()
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils_test.py", line 971, in async_fn
-E                   result = await coro2
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
-E                   return fut.result()
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_client.py", line 6865, in test_get_client_functions_spawn_clusters
-E                   await c.gather(c.map(f, range(2)))
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/client.py", line 2073, in _gather
-E                   raise exception.with_traceback(traceback)
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/tests/test_client.py", line 6850, in f
-E                   with LocalCluster(
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/deploy/local.py", line 238, in __init__
-E                   super().__init__(
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/deploy/spec.py", line 259, in __init__
-E                   super().__init__(
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/deploy/cluster.py", line 69, in __init__
-E                   self._loop_runner = LoopRunner(loop=loop, asynchronous=asynchronous)
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/utils.py", line 448, in __init__
-E                   self._loop = IOLoop()
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/util.py", line 276, in __new__
-E                   instance.initialize(*args, **init_kwargs)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 339, in initialize
-E                   super().initialize(**kwargs)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 140, in initialize
-E                   super().initialize(**kwargs)
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/ioloop.py", line 350, in initialize
-E                   self.make_current()
-E                 File "/home/matthew/pkgsrc/install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 353, in make_current
-E                   warnings.warn(
-E               DeprecationWarning: make_current is deprecated; start the event loop first
-E               
-E               During handling of the above exception, another exception occurred:
-E               
-E               Traceback (most recent call last):
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/deploy/cluster.py", line 224, in __del__
-E                   _warn(f"unclosed cluster {self_r}", ResourceWarning, source=self)
-E               ResourceWarning: unclosed cluster with a broken __repr__ <distributed.deploy.local.LocalCluster object at 0x7f15300cdb80>
-
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning
-__________________ test_computation_object_code_dask_persist ___________________
-
-cls = <class '_pytest.runner.CallInfo'>
-func = <function call_runtest_hook.<locals>.<lambda> at 0x5640424da9f0>
-when = 'call'
-reraise = (<class '_pytest.outcomes.Exit'>, <class 'KeyboardInterrupt'>)
-
-    @classmethod
-    def from_call(
-        cls,
-        func: "Callable[[], TResult]",
-        when: "Literal['collect', 'setup', 'call', 'teardown']",
-        reraise: Optional[
-            Union[Type[BaseException], Tuple[Type[BaseException], ...]]
-        ] = None,
-    ) -> "CallInfo[TResult]":
-        """Call func, wrapping the result in a CallInfo.
-    
-        :param func:
-            The function to call. Called without arguments.
-        :param when:
-            The phase in which the function is called.
-        :param reraise:
-            Exception or exceptions that shall propagate if raised by the
-            function, instead of being wrapped in the CallInfo.
-        """
-        excinfo = None
-        start = timing.time()
-        precise_start = timing.perf_counter()
-        try:
->           result: Optional[TResult] = func()
-
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/runner.py:338: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/runner.py:259: in <lambda>
-    lambda: ihook(item=item, **kwds), when=when, reraise=reraise
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__
-    return self._hookexec(self, self.get_hookimpls(), kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec
-    return self._inner_hookexec(hook, methods, kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>
-    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/unraisableexception.py:88: in pytest_runtest_call
-    yield from unraisable_exception_runtest_hook()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-    def unraisable_exception_runtest_hook() -> Generator[None, None, None]:
-        with catch_unraisable_exception() as cm:
-            yield
-            if cm.unraisable:
-                if cm.unraisable.err_msg is not None:
-                    err_msg = cm.unraisable.err_msg
-                else:
-                    err_msg = "Exception ignored in"
-                msg = f"{err_msg}: {cm.unraisable.object!r}\n\n"
-                msg += "".join(
-                    traceback.format_exception(
-                        cm.unraisable.exc_type,
-                        cm.unraisable.exc_value,
-                        cm.unraisable.exc_traceback,
-                    )
-                )
->               warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
-E               pytest.PytestUnraisableExceptionWarning: Exception ignored in: <function Cluster.__del__ at 0x564038596c60>
-E               
-E               Traceback (most recent call last):
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/deploy/cluster.py", line 224, in __del__
-E                   _warn(f"unclosed cluster {self_r}", ResourceWarning, source=self)
-E               ResourceWarning: unclosed cluster with a broken __repr__ <distributed.deploy.local.LocalCluster object at 0x7f153008a440>
-
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning
-_____________________ test_close_loop_sync_start_new_loop ______________________
-
-cleanup = None
-
-    @pytest.mark.filterwarnings("ignore:There is no current event loop:DeprecationWarning")
-    def test_close_loop_sync_start_new_loop(cleanup):
-        with _check_loop_runner():
->           _check_cluster_and_client_loop(loop=None)
-
-distributed/tests/test_client_loop.py:35: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-distributed/tests/test_client_loop.py:24: in _check_cluster_and_client_loop
-    with LocalCluster(
-distributed/deploy/local.py:238: in __init__
-    super().__init__(
-distributed/deploy/spec.py:259: in __init__
-    super().__init__(
-distributed/deploy/cluster.py:69: in __init__
-    self._loop_runner = LoopRunner(loop=loop, asynchronous=asynchronous)
-distributed/utils.py:448: in __init__
-    self._loop = IOLoop()
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/util.py:276: in __new__
-    instance.initialize(*args, **init_kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:339: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:140: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/ioloop.py:350: in initialize
-    self.make_current()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-self = <tornado.platform.asyncio.AsyncIOLoop object at 0x7f15e400f5d0>
-
-    def make_current(self) -> None:
->       warnings.warn(
-            "make_current is deprecated; start the event loop first",
-            DeprecationWarning,
-            stacklevel=2,
-        )
-E       DeprecationWarning: make_current is deprecated; start the event loop first
-
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:353: DeprecationWarning
-____________________ test_close_loop_sync_use_running_loop _____________________
-
-cleanup = None
-
-    @pytest.mark.filterwarnings("ignore:There is no current event loop:DeprecationWarning")
-    def test_close_loop_sync_use_running_loop(cleanup):
-        with _check_loop_runner():
-            # Start own loop or use current thread's one.
->           loop_runner = LoopRunner()
-
-distributed/tests/test_client_loop.py:43: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-distributed/utils.py:448: in __init__
-    self._loop = IOLoop()
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/util.py:276: in __new__
-    instance.initialize(*args, **init_kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:339: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:140: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/ioloop.py:350: in initialize
-    self.make_current()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-self = <tornado.platform.asyncio.AsyncIOLoop object at 0x564040bd7220>
-
-    def make_current(self) -> None:
->       warnings.warn(
-            "make_current is deprecated; start the event loop first",
-            DeprecationWarning,
-            stacklevel=2,
-        )
-E       DeprecationWarning: make_current is deprecated; start the event loop first
-
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:353: DeprecationWarning
-___________________________ test_cluster_dump_state ____________________________
-
-cls = <class '_pytest.runner.CallInfo'>
-func = <function call_runtest_hook.<locals>.<lambda> at 0x564040ddede0>
-when = 'call'
-reraise = (<class '_pytest.outcomes.Exit'>, <class 'KeyboardInterrupt'>)
-
-    @classmethod
-    def from_call(
-        cls,
-        func: "Callable[[], TResult]",
-        when: "Literal['collect', 'setup', 'call', 'teardown']",
-        reraise: Optional[
-            Union[Type[BaseException], Tuple[Type[BaseException], ...]]
-        ] = None,
-    ) -> "CallInfo[TResult]":
-        """Call func, wrapping the result in a CallInfo.
-    
-        :param func:
-            The function to call. Called without arguments.
-        :param when:
-            The phase in which the function is called.
-        :param reraise:
-            Exception or exceptions that shall propagate if raised by the
-            function, instead of being wrapped in the CallInfo.
-        """
-        excinfo = None
-        start = timing.time()
-        precise_start = timing.perf_counter()
-        try:
->           result: Optional[TResult] = func()
-
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/runner.py:338: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/runner.py:259: in <lambda>
-    lambda: ihook(item=item, **kwds), when=when, reraise=reraise
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__
-    return self._hookexec(self, self.get_hookimpls(), kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec
-    return self._inner_hookexec(hook, methods, kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>
-    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/unraisableexception.py:88: in pytest_runtest_call
-    yield from unraisable_exception_runtest_hook()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-    def unraisable_exception_runtest_hook() -> Generator[None, None, None]:
-        with catch_unraisable_exception() as cm:
-            yield
-            if cm.unraisable:
-                if cm.unraisable.err_msg is not None:
-                    err_msg = cm.unraisable.err_msg
-                else:
-                    err_msg = "Exception ignored in"
-                msg = f"{err_msg}: {cm.unraisable.object!r}\n\n"
-                msg += "".join(
-                    traceback.format_exception(
-                        cm.unraisable.exc_type,
-                        cm.unraisable.exc_value,
-                        cm.unraisable.exc_traceback,
-                    )
-                )
->               warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
-E               pytest.PytestUnraisableExceptionWarning: Exception ignored in: <function Cluster.__del__ at 0x564038596c60>
-E               
-E               Traceback (most recent call last):
-E                 File "/home/matthew/pkgsrc/work/wip/py-distributed/work/distributed-2022.8.1/distributed/deploy/cluster.py", line 224, in __del__
-E                   _warn(f"unclosed cluster {self_r}", ResourceWarning, source=self)
-E               ResourceWarning: unclosed cluster with a broken __repr__ <distributed.deploy.local.LocalCluster object at 0x56403e67d910>
-
-../../../../../install.20220728/lib/python3.10/site-packages/_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning
-______________________________ test_git_revision _______________________________
-
-    def test_git_revision() -> None:
->       assert isinstance(distributed.__git_revision__, str)
-E       assert False
-E        +  where False = isinstance(None, str)
-E        +    where None = distributed.__git_revision__
-
-distributed/tests/test_init.py:13: AssertionError
-_____________________________ test_stack_overflow ______________________________
-
-    def test_stack_overflow():
-        old = sys.getrecursionlimit()
-        sys.setrecursionlimit(200)
-        try:
-            state = create()
-            frame = None
-    
-            def f(i):
-                if i == 0:
-                    nonlocal frame
-                    frame = sys._current_frames()[threading.get_ident()]
-                    return
-                else:
-                    return f(i - 1)
-    
->           f(sys.getrecursionlimit() - 40)
-
-distributed/tests/test_profile.py:373: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-distributed/tests/test_profile.py:371: in f
-    return f(i - 1)
    [the two recursive frames above are repeated 154 times in total in this traceback]
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-i = 6
-
-    def f(i):
->       if i == 0:
-E       RecursionError: maximum recursion depth exceeded in comparison
-
-distributed/tests/test_profile.py:366: RecursionError
-_______________________________ test_loop_runner _______________________________
-
-loop_in_thread = <tornado.platform.asyncio.AsyncIOMainLoop object at 0x7f15ec027e30>
-
-    @pytest.mark.filterwarnings("ignore:There is no current event loop:DeprecationWarning")
-    def test_loop_runner(loop_in_thread):
-        # Implicit loop
->       loop = IOLoop()
-
-distributed/tests/test_utils.py:410: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/util.py:276: in __new__
-    instance.initialize(*args, **init_kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:339: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:140: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/ioloop.py:350: in initialize
-    self.make_current()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-self = <tornado.platform.asyncio.AsyncIOLoop object at 0x564042f0aed0>
-
-    def make_current(self) -> None:
->       warnings.warn(
-            "make_current is deprecated; start the event loop first",
-            DeprecationWarning,
-            stacklevel=2,
-        )
-E       DeprecationWarning: make_current is deprecated; start the event loop first
-
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:353: DeprecationWarning
-____________________________ test_two_loop_runners _____________________________
-
-loop_in_thread = <tornado.platform.asyncio.AsyncIOMainLoop object at 0x7f1584180730>
-
-    @pytest.mark.filterwarnings("ignore:There is no current event loop:DeprecationWarning")
-    def test_two_loop_runners(loop_in_thread):
-        # Loop runners tied to the same loop should cooperate
-    
-        # ABCCBA
->       loop = IOLoop()
-
-distributed/tests/test_utils.py:495: 
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/util.py:276: in __new__
-    instance.initialize(*args, **init_kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:339: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:140: in initialize
-    super().initialize(**kwargs)
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/ioloop.py:350: in initialize
-    self.make_current()
-_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
-
-self = <tornado.platform.asyncio.AsyncIOLoop object at 0x564041f41f90>
-
-    def make_current(self) -> None:
->       warnings.warn(
-            "make_current is deprecated; start the event loop first",
-            DeprecationWarning,
-            stacklevel=2,
-        )
-E       DeprecationWarning: make_current is deprecated; start the event loop first
-
-../../../../../install.20220728/lib/python3.10/site-packages/tornado/platform/asyncio.py:353: DeprecationWarning
-============================= slowest 20 durations =============================
-10.93s call     distributed/dashboard/tests/test_scheduler_bokeh.py::test_simple
-8.22s call     distributed/cli/tests/test_dask_worker.py::test_reconnect_deprecated
-7.54s call     distributed/tests/test_scheduler.py::test_restart_nanny_timeout_exceeded
-5.89s call     distributed/tests/test_stress.py::test_cancel_stress_sync
-5.77s call     distributed/cli/tests/test_dask_scheduler.py::test_hostport
-5.54s call     distributed/tests/test_worker.py::test_tick_interval
-5.37s call     distributed/tests/test_client.py::test_performance_report
-5.31s call     distributed/tests/test_failed_workers.py::test_worker_doesnt_await_task_completion
-5.28s call     distributed/tests/test_steal.py::test_balance_with_longer_task
-5.18s call     distributed/diagnostics/tests/test_memory_sampler.py::test_pandas[False]
-4.78s call     distributed/tests/test_nanny.py::test_num_fds
-4.41s call     distributed/diagnostics/tests/test_progress.py::test_group_timing
-4.20s call     distributed/deploy/tests/test_adaptive.py::test_scale_needs_to_be_awaited
-3.97s call     distributed/tests/test_stress.py::test_cancel_stress
-3.75s call     distributed/tests/test_failed_workers.py::test_failing_worker_with_additional_replicas_on_cluster
-3.71s call     distributed/tests/test_failed_workers.py::test_restart_sync
-3.64s call     distributed/diagnostics/tests/test_progress.py::test_AllProgress
-3.55s call     distributed/tests/test_scheduler.py::test_log_tasks_during_restart
-3.36s call     distributed/tests/test_nanny.py::test_environ_plugin
-3.29s call     distributed/tests/test_client.py::test_upload_directory
-=========================== short test summary info ============================
-SKIPPED [1] distributed/cli/tests/test_dask_ssh.py:11: could not import 'paramiko': No module named 'paramiko'
-SKIPPED [1] distributed/comm/tests/test_ucx.py:11: could not import 'ucp': No module named 'ucp'
-SKIPPED [1] distributed/comm/tests/test_ucx_config.py:22: could not import 'ucp': No module named 'ucp'
-SKIPPED [1] distributed/deploy/tests/test_old_ssh.py:7: could not import 'paramiko': No module named 'paramiko'
-SKIPPED [1] distributed/deploy/tests/test_ssh.py:5: could not import 'asyncssh': No module named 'asyncssh'
-SKIPPED [1] distributed/diagnostics/tests/test_nvml.py:10: could not import 'pynvml': No module named 'pynvml'
-SKIPPED [1] distributed/protocol/tests/test_cupy.py:11: could not import 'cupy': No module named 'cupy'
-SKIPPED [1] distributed/protocol/tests/test_keras.py:5: could not import 'keras': No module named 'keras'
-SKIPPED [1] distributed/protocol/tests/test_netcdf4.py:5: could not import 'netCDF4': No module named 'netCDF4'
-SKIPPED [1] distributed/protocol/tests/test_numba.py:11: could not import 'numba.cuda': No module named 'numba'
-SKIPPED [1] distributed/protocol/tests/test_rmm.py:10: could not import 'numba.cuda': No module named 'numba'
-SKIPPED [1] distributed/protocol/tests/test_sparse.py:6: could not import 'sparse': No module named 'sparse'
-SKIPPED [1] distributed/protocol/tests/test_torch.py:8: could not import 'torch': No module named 'torch'
-SKIPPED [1] distributed/tests/test_jupyter.py:5: could not import 'jupyter_server': No module named 'jupyter_server'
-SKIPPED [1] distributed/cli/tests/test_dask_scheduler.py:462: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_scheduler.py:474: need --runslow option to run
-SKIPPED [2] distributed/cli/tests/test_dask_scheduler.py:531: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:163: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:187: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:255: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:298: need --runslow option to run
-SKIPPED [2] distributed/cli/tests/test_dask_worker.py:316: need --runslow option to run
-SKIPPED [2] distributed/cli/tests/test_dask_worker.py:336: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:351: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:374: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:381: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:389: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:403: need --runslow option to run
-SKIPPED [4] distributed/cli/tests/test_dask_worker.py:423: need --runslow option to run
-SKIPPED [4] distributed/cli/tests/test_dask_worker.py:456: need --runslow option to run
-SKIPPED [4] distributed/cli/tests/test_dask_worker.py:492: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:511: need --runslow option to run
-SKIPPED [2] distributed/cli/tests/test_dask_worker.py:555: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:585: need --runslow option to run
-SKIPPED [2] distributed/cli/tests/test_dask_worker.py:595: need --runslow option to run
-SKIPPED [1] distributed/cli/tests/test_dask_worker.py:637: need --runslow option to run
-SKIPPED [2] distributed/cli/tests/test_dask_worker.py:654: need --runslow option to run
-SKIPPED [1] distributed/comm/tests/test_comms.py:338: not applicable for asyncio
-SKIPPED [5] distributed/utils_test.py:2102: could not import 'ucp': No module named 'ucp'
-SKIPPED [2] distributed/comm/tests/test_comms.py:884: Not applicable for asyncio
-SKIPPED [1] distributed/comm/tests/test_comms.py:904: Not applicable for asyncio
-SKIPPED [1] distributed/dashboard/tests/test_scheduler_bokeh.py:102: could not import 'crick': No module named 'crick'
-SKIPPED [1] distributed/dashboard/tests/test_scheduler_bokeh.py:821: need --runslow option to run
-SKIPPED [4] distributed/dashboard/tests/test_worker_bokeh.py:68: need --runslow option to run
-SKIPPED [1] distributed/dashboard/tests/test_worker_bokeh.py:101: could not import 'crick': No module named 'crick'
-SKIPPED [1] distributed/deploy/tests/test_local.py:1257: need --runslow option to run
-SKIPPED [1] distributed/deploy/tests/test_spec_cluster.py:139: need --runslow option to run
-SKIPPED [1] distributed/diagnostics/tests/test_memory_sampler.py:49: need --runslow option to run
-SKIPPED [2] distributed/diagnostics/tests/test_memory_sampler.py:108: need --runslow option to run
-SKIPPED [4] distributed/protocol/tests/test_collection_cuda.py:15: could not import 'cupy': No module named 'cupy'
-SKIPPED [4] distributed/protocol/tests/test_collection_cuda.py:47: could not import 'cudf': No module named 'cudf'
-SKIPPED [1] distributed/protocol/tests/test_numpy.py:153: could not import 'numpy.core.test_rational': No module named 'numpy.core.test_rational'
-SKIPPED [1] distributed/protocol/tests/test_numpy.py:183: need --runslow option to run
-SKIPPED [1] distributed/protocol/tests/test_protocol.py:164: need --runslow option to run
-SKIPPED [1] distributed/shuffle/tests/test_shuffle.py:118: need --runslow option to run
-SKIPPED [1] distributed/shuffle/tests/test_shuffle_extension.py:25: unconditional skip
-SKIPPED [1] distributed/shuffle/tests/test_shuffle_extension.py:43: unconditional skip
-SKIPPED [1] distributed/tests/test_active_memory_manager.py:352: need --runslow option to run
-SKIPPED [1] distributed/tests/test_active_memory_manager.py:615: need --runslow option to run
-SKIPPED [2] distributed/tests/test_active_memory_manager.py:790: need --runslow option to run
-SKIPPED [1] distributed/tests/test_active_memory_manager.py:1010: need --runslow option to run
-SKIPPED [1] distributed/tests/test_active_memory_manager.py:1088: need --runslow option to run
-SKIPPED [1] distributed/tests/test_active_memory_manager.py:1111: need --runslow option to run
-SKIPPED [2] distributed/tests/test_active_memory_manager.py:1133: need --runslow option to run
-SKIPPED [1] distributed/tests/test_actor.py:480: need --runslow option to run
-SKIPPED [1] distributed/tests/test_batched.py:158: need --runslow option to run
-SKIPPED [1] distributed/tests/test_batched.py:228: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:837: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:846: unconditional skip
-SKIPPED [1] distributed/tests/test_client.py:872: unconditional skip
-SKIPPED [1] distributed/tests/test_client.py:891: unconditional skip
-SKIPPED [1] distributed/tests/test_client.py:1637: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:1756: unconditional skip
-SKIPPED [1] distributed/tests/test_client.py:2002: unconditional skip
-SKIPPED [1] distributed/tests/test_client.py:2585: unconditional skip
-SKIPPED [1] distributed/tests/test_client.py:2614: Use fast random selection now
-SKIPPED [1] distributed/tests/test_client.py:3227: unconditional skip
-SKIPPED [1] distributed/tests/test_client.py:3434: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:3499: need --runslow option to run
-SKIPPED [2] distributed/tests/test_client.py:3725: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:4426: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:4529: Now prefer first-in-first-out
-SKIPPED [1] distributed/tests/test_client.py:4559: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:4944: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:4987: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:5006: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:5296: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:5550: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:6350: known intermittent failure
-SKIPPED [1] distributed/tests/test_client.py:6485: On Py3.10+ semaphore._loop is not bound until .acquire() blocks
-SKIPPED [1] distributed/tests/test_client.py:6505: On Py3.10+ semaphore._loop is not bound until .acquire() blocks
-SKIPPED [1] distributed/tests/test_client.py:6514: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client.py:7517: need --runslow option to run
-SKIPPED [2] distributed/tests/test_client.py:7545: need --runslow option to run
-SKIPPED [1] distributed/tests/test_client_executor.py:132: need --runslow option to run
-SKIPPED [1] distributed/tests/test_config.py:370: could not import 'uvloop': No module named 'uvloop'
-SKIPPED [1] distributed/tests/test_core.py:311: need --runslow option to run
-SKIPPED [1] distributed/tests/test_core.py:587: need --runslow option to run
-SKIPPED [1] distributed/tests/test_core.py:955: could not import 'crick': No module named 'crick'
-SKIPPED [1] distributed/tests/test_core.py:964: could not import 'crick': No module named 'crick'
-SKIPPED [1] distributed/tests/test_counter.py:13: no crick library
-SKIPPED [1] distributed/tests/test_dask_collections.py:185: could not import 'sparse': No module named 'sparse'
-SKIPPED [1] distributed/tests/test_diskutils.py:224: need --runslow option to run
-SKIPPED [1] distributed/tests/test_failed_workers.py:37: need --runslow option to run
-SKIPPED [2] distributed/tests/test_failed_workers.py:48: need --runslow option to run
-SKIPPED [1] distributed/tests/test_failed_workers.py:75: need --runslow option to run
-SKIPPED [1] distributed/tests/test_failed_workers.py:86: need --runslow option to run
-SKIPPED [1] distributed/tests/test_failed_workers.py:247: need --runslow option to run
-SKIPPED [1] distributed/tests/test_failed_workers.py:323: need --runslow option to run
-SKIPPED [1] distributed/tests/test_failed_workers.py:406: need --runslow option to run
-SKIPPED [1] distributed/tests/test_failed_workers.py:418: need --runslow option to run
-SKIPPED [1] distributed/tests/test_nanny.py:96: need --runslow option to run
-SKIPPED [1] distributed/tests/test_nanny.py:110: need --runslow option to run
-SKIPPED [1] distributed/tests/test_nanny.py:142: need --runslow option to run
-SKIPPED [1] distributed/tests/test_nanny.py:266: need --runslow option to run
-SKIPPED [1] distributed/tests/test_nanny.py:406: need --runslow option to run
-SKIPPED [1] distributed/tests/test_nanny.py:571: need --runslow option to run
-SKIPPED [1] distributed/tests/test_nanny.py:579: need --runslow option to run
-SKIPPED [1] distributed/tests/test_nanny.py:614: need --runslow option to run
-SKIPPED [1] distributed/tests/test_nanny.py:657: need --runslow option to run
-SKIPPED [1] distributed/tests/test_profile.py:74: could not import 'stacktrace': No module named 'stacktrace'
-SKIPPED [1] distributed/tests/test_queues.py:91: getting same client from main thread
-SKIPPED [1] distributed/tests/test_queues.py:115: need --runslow option to run
-SKIPPED [1] distributed/tests/test_resources.py:366: Skipped
-SKIPPED [1] distributed/tests/test_resources.py:423: Should protect resource keys from optimization
-SKIPPED [1] distributed/tests/test_resources.py:444: atop fusion seemed to break this
-SKIPPED [1] distributed/tests/test_scheduler.py:636: need --runslow option to run
-SKIPPED [1] distributed/tests/test_scheduler.py:757: need --runslow option to run
-SKIPPED [1] distributed/tests/test_scheduler.py:1121: need --runslow option to run
-SKIPPED [1] distributed/tests/test_scheduler.py:1174: need --runslow option to run
-SKIPPED [1] distributed/tests/test_scheduler.py:1187: need --runslow option to run
-SKIPPED [1] distributed/tests/test_scheduler.py:2593: need --runslow option to run
-SKIPPED [1] distributed/tests/test_scheduler.py:3304: need --runslow option to run
-SKIPPED [1] distributed/tests/test_semaphore.py:132: need --runslow option to run
-SKIPPED [1] distributed/tests/test_semaphore.py:194: need --runslow option to run
-SKIPPED [1] distributed/tests/test_steal.py:256: Skipped
-SKIPPED [14] distributed/tests/test_steal.py:705: need --runslow option to run
-SKIPPED [2] distributed/tests/test_stress.py:48: need --runslow option to run
-SKIPPED [1] distributed/tests/test_stress.py:93: need --runslow option to run
-SKIPPED [1] distributed/tests/test_stress.py:170: need --runslow option to run
-SKIPPED [1] distributed/tests/test_stress.py:193: unconditional skip
-SKIPPED [1] distributed/tests/test_stress.py:219: need --runslow option to run
-SKIPPED [1] distributed/tests/test_stress.py:247: need --runslow option to run
-SKIPPED [1] distributed/tests/test_stress.py:288: need --runslow option to run
-SKIPPED [1] distributed/tests/test_utils_perf.py:86: need --runslow option to run
-SKIPPED [1] distributed/tests/test_utils_test.py:142: This hangs on travis
-SKIPPED [1] distributed/tests/test_utils_test.py:403: need --runslow option to run
-SKIPPED [1] distributed/tests/test_utils_test.py:564: need --runslow option to run
-SKIPPED [1] distributed/tests/test_utils_test.py:759: need --runslow option to run
-SKIPPED [1] distributed/tests/test_variable.py:196: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker.py:216: don't yet support uploading pyc files
-SKIPPED [1] distributed/tests/test_worker.py:306: could not import 'crick': No module named 'crick'
-SKIPPED [1] distributed/tests/test_worker.py:342: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker.py:1137: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker.py:1233: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker.py:1491: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker.py:1521: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker.py:1548: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker.py:1578: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker.py:1692: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker.py:1950: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker.py:2785: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker.py:3427: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker_memory.py:813: need --runslow option to run
-SKIPPED [2] distributed/tests/test_worker_memory.py:825: need --runslow option to run
-SKIPPED [1] distributed/tests/test_worker_memory.py:929: need --runslow option to run
-XFAIL distributed/deploy/tests/test_adaptive.py::test_adaptive_scale_down_override
-  changed API
-XFAIL distributed/protocol/tests/test_serialize.py::test_check_dask_serializable[data7-True]
-  Only checks 0th element for now.
-XFAIL distributed/shuffle/tests/test_shuffle.py::test_add_some_results
-  Don't update ongoing shuffles
-XFAIL distributed/tests/test_actor.py::test_linear_access
-  Tornado can pass things out of order; should rely on sending small messages rather than rpc
-XFAIL distributed/tests/test_client.py::test_nested_prioritization
-  https://github.com/dask/dask/pull/6807
-XFAIL distributed/tests/test_client.py::test_annotations_survive_optimization
-  https://github.com/dask/dask/issues/7036
-XFAIL distributed/tests/test_nanny.py::test_no_unnecessary_imports_on_worker[pandas]
-  distributed#5723
-XFAIL distributed/tests/test_preload.py::test_client_preload_text
-  The preload argument to the client isn't supported yet
-XFAIL distributed/tests/test_preload.py::test_client_preload_click
-  The preload argument to the client isn't supported yet
-XFAIL distributed/tests/test_resources.py::test_collections_get[True]
-  don't track resources through optimization
-XFAIL distributed/tests/test_scheduler.py::test_rebalance_raises_missing_data3[True]
-  reason: Freeing keys and gathering data is using different
-                   channels (stream vs explicit RPC). Therefore, the
-                   partial-fail is very timing sensitive and subject to a race
-                   condition. This test assumes that the data is freed before
-                   the rebalance get_data requests come in but merely deleting
-                   the futures is not sufficient to guarantee this
-XFAIL distributed/tests/test_utils_perf.py::test_gc_diagnosis_rss_win
-  flaky and re-fails on rerun
-XFAIL distributed/tests/test_utils_test.py::test_gen_test
-  Test should always fail to ensure the body of the test function was run
-XFAIL distributed/tests/test_utils_test.py::test_gen_test_legacy_implicit
-  Test should always fail to ensure the body of the test function was run
-XFAIL distributed/tests/test_utils_test.py::test_gen_test_legacy_explicit
-  Test should always fail to ensure the body of the test function was run
-XFAIL distributed/tests/test_worker.py::test_share_communication
-  very high flakiness
-XFAIL distributed/tests/test_worker.py::test_dont_overlap_communications_to_same_worker
-  very high flakiness
-XFAIL distributed/tests/test_worker_memory.py::test_workerstate_fail_to_pickle_flight
-  https://github.com/dask/distributed/issues/6705
-XFAIL distributed/tests/test_worker_state_machine.py::test_gather_dep_failure
-  https://github.com/dask/distributed/issues/6705
-FAILED distributed/deploy/tests/test_adaptive.py::test_basic_no_loop - Deprec...
-FAILED distributed/deploy/tests/test_local.py::test_adapt_then_manual - pytes...
-FAILED distributed/deploy/tests/test_spec_cluster.py::test_loop_started_in_constructor
-FAILED distributed/tests/test_client.py::test_startup_close_startup_sync - De...
-FAILED distributed/tests/test_client.py::test_client_async_before_loop_starts
-FAILED distributed/tests/test_client.py::test_get_client_functions_spawn_clusters
-FAILED distributed/tests/test_client.py::test_computation_object_code_dask_persist
-FAILED distributed/tests/test_client_loop.py::test_close_loop_sync_start_new_loop
-FAILED distributed/tests/test_client_loop.py::test_close_loop_sync_use_running_loop
-FAILED distributed/tests/test_cluster_dump.py::test_cluster_dump_state - pyte...
-FAILED distributed/tests/test_init.py::test_git_revision - assert False
-FAILED distributed/tests/test_profile.py::test_stack_overflow - RecursionErro...
-FAILED distributed/tests/test_utils.py::test_loop_runner - DeprecationWarning...
-FAILED distributed/tests/test_utils.py::test_two_loop_runners - DeprecationWa...
-ERROR distributed/deploy/tests/test_adaptive.py::test_basic_no_loop - Asserti...
-ERROR distributed/deploy/tests/test_spec_cluster.py::test_loop_started_in_constructor
-ERROR distributed/tests/test_client_loop.py::test_close_loop_sync_start_new_loop
-= 14 failed, 2749 passed, 216 skipped, 19 xfailed, 9 xpassed, 3 errors in 1090.66s (0:18:10) =
-*** Error code 1
-
-Stop.
-bmake[1]: stopped in /home/matthew/pkgsrc/pkgsrc/wip/py-distributed
-/home/matthew/pkgsrc/install.20220728/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 29 leaked semaphore objects to clean up at shutdown
-  warnings.warn('resource_tracker: There appear to be %d '
-*** Error code 1
-
-Stop.
-bmake: stopped in /home/matthew/pkgsrc/pkgsrc/wip/py-distributed
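
For context: the DeprecationWarning failures and the "unclosed cluster" unraisable warnings in the log above share one root cause visible in the tracebacks: tornado deprecates IOLoop.make_current(), while distributed 2022.8.1's LoopRunner (distributed/utils.py:448) still constructs a bare IOLoop() outside a running event loop. The sketch below is not part of the commit; it is a minimal reproduction of that warning, assuming Python 3.10 and tornado 6.2 or newer as in the environment shown in the log.

    # Sketch only: reproduce the deprecation path hit by distributed's LoopRunner.
    import warnings
    from tornado.ioloop import IOLoop

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        loop = IOLoop()  # no asyncio loop is running, so tornado falls back to make_current()
    loop.close()
    for w in caught:
        print(f"{w.category.__name__}: {w.message}")
    # On tornado 6.2 the output is expected to include:
    #   DeprecationWarning: make_current is deprecated; start the event loop first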

