pkgsrc-WIP-changes archive


wip/llama.cpp: Update to 0.0.2.3183



Module Name:	pkgsrc-wip
Committed By:	Ryo ONODERA <ryoon%NetBSD.org@localhost>
Pushed By:	ryoon
Date:		Wed Jun 19 21:41:01 2024 +0900
Changeset:	45461265ce2615116e91d764230bc15d6a9e9bb1

Modified Files:
	llama.cpp/Makefile
	llama.cpp/distinfo

Log Message:
wip/llama.cpp: Update to 0.0.2.3183

Changelog:
b3183:
codecov : remove (#8004)

b3182:
[SYCL] refactor (#6408)

* separate lower precision GEMM from the main files

* fix hardcoded workgroup size

b3181:
tokenizer : BPE fixes (#7530)

* Random test: add_bos_token, add_eos_token
* Random test: add BPE models for testing
* Custom regex split fails with codepoint 0
* Fix falcon punctuation regex
* Refactor llm_tokenizer_bpe: move code to constructor
* Move 'add_special_bos/eos' logic to llm_tokenizer_bpe
* Move tokenizer flags to vocab structure.
* Default values for special_add_bos/eos
* Build vocab.special_tokens_cache using vocab token types
* Generalize 'jina-v2' per token attributes
* Fix unicode whitespaces (deepseek-coder, deepseek-llm)
* Skip missing byte tokens (falcon)
* Better unicode data generation
* Replace char32_t with uint32_t
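The add_bos_token/add_eos_token handling moved into llm_tokenizer_bpe can be pictured as a small sketch. This is a hypothetical illustration in Python, not the actual C++ code; the function and parameter names are invented:

```python
# Hypothetical sketch of per-vocab add_bos/add_eos flags: prepend/append
# special tokens only when the flag is set and the token exists.
def tokenize(text_ids, bos_id, eos_id, add_bos=True, add_eos=False):
    out = ([bos_id] if add_bos and bos_id is not None else []) + list(text_ids)
    if add_eos and eos_id is not None:
        out.append(eos_id)
    return out
```

Keeping these flags in the vocab structure lets each model's defaults travel with its tokenizer instead of being passed around by callers.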

b3180:
Only use FIM middle token if it exists (#7648)

* Only use FIM middle if it exists
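The guard described in b3180 amounts to checking whether the model defines a fill-in-the-middle (FIM) middle token before emitting it. A hypothetical Python sketch (names invented, not the real llama.cpp API):

```python
# Sketch: build an infill prompt, appending the FIM middle token only
# when the model's vocab actually defines one (None = absent).
def build_infill_prompt(prefix_ids, suffix_ids, fim_pre_id, fim_suf_id, fim_mid_id):
    ids = [fim_pre_id] + list(prefix_ids) + [fim_suf_id] + list(suffix_ids)
    if fim_mid_id is not None:
        ids.append(fim_mid_id)
    return ids
```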

b3179:
Fix no gcc pragma on Windows (#7751)

b3178:
Allow compiling with CUDA without CUDA runtime installed (#7989)

On hosts that are not set up to run CUDA code, it is still possible to
compile llama.cpp with CUDA support by installing only the development
packages.  However, the runtime libraries such as
/usr/lib64/libcuda.so* are missing, so the link step currently fails.

The development environment is prepared for such situations: stub
libraries for all the CUDA libraries are available in the
$(CUDA_PATH)/lib64/stubs directory.  Adding this directory to the end
of the library search path changes nothing for environments that
already work, but makes it possible to compile llama.cpp even when the
runtime libraries are not installed.
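The key point is the ordering: the stubs directory goes at the end of the search path, so a real libcuda.so found earlier still wins. A minimal Python sketch of that path construction (the function name and default CUDA path are assumptions, not part of the commit):

```python
# Sketch: append the CUDA stubs directory *last* so genuine runtime
# libraries, when present earlier in the path, take precedence at link time.
def link_search_dirs(existing_dirs, cuda_path="/usr/local/cuda"):
    return list(existing_dirs) + [cuda_path + "/lib64/stubs"]
```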

b3177:
chore: clean useless beam search param (#7985)

b3175:
ggml : sync

To see a diff of this commit:
https://wip.pkgsrc.org/cgi-bin/gitweb.cgi?p=pkgsrc-wip.git;a=commitdiff;h=45461265ce2615116e91d764230bc15d6a9e9bb1

Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.

diffstat:
 llama.cpp/Makefile | 2 +-
 llama.cpp/distinfo | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diffs:
diff --git a/llama.cpp/Makefile b/llama.cpp/Makefile
index 22c8780582..fef1ca9567 100644
--- a/llama.cpp/Makefile
+++ b/llama.cpp/Makefile
@@ -4,7 +4,7 @@ DISTNAME=	llama.cpp-${GITHUB_TAG}
 PKGNAME=	${DISTNAME:S/-b/-0.0.2./}
 CATEGORIES=	devel
 MASTER_SITES=	${MASTER_SITE_GITHUB:=ggerganov/}
-GITHUB_TAG=	b3173
+GITHUB_TAG=	b3183
 
 MAINTAINER=	pkgsrc-users%NetBSD.org@localhost
 HOMEPAGE=	https://github.com/ggerganov/llama.cpp/
diff --git a/llama.cpp/distinfo b/llama.cpp/distinfo
index 8211b500bd..153ec61a9d 100644
--- a/llama.cpp/distinfo
+++ b/llama.cpp/distinfo
@@ -1,5 +1,5 @@
 $NetBSD$
 
-BLAKE2s (llama.cpp-b3173.tar.gz) = 74cd3a8c11a8def5f213bdb1afc209abacf89cd39522ca71bf12a6275a9909ac
-SHA512 (llama.cpp-b3173.tar.gz) = acde4758d08f4be9fafa570fcc9ab6700c556381242548a02163c01eaeac30a3680d9f097a8d4b92dcc2783049f80d4647f1c6fe4964c1ddf02303d4e3dc6abf
-Size (llama.cpp-b3173.tar.gz) = 20592194 bytes
+BLAKE2s (llama.cpp-b3183.tar.gz) = 3615f832afd058a54b4f9825bccbe44edb8bfd0b0a203036b7b472116e53afbf
+SHA512 (llama.cpp-b3183.tar.gz) = d8540af75320029dfb764fc1c0c461cefe241528d373fa12132733712af80d486ef2774f61cfddfe64efdfd50a36910c40da8986954fbed1f04f00a41edfd9ba
+Size (llama.cpp-b3183.tar.gz) = 20598797 bytes
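The package version 0.0.2.3183 in this update is derived from the upstream tag b3183 by the Makefile's ${DISTNAME:S/-b/-0.0.2./} modifier. A small Python sketch of that transform (the helper name is hypothetical; bmake's :S// replaces only the first match, mirrored here with count=1):

```python
# Sketch of the Makefile's ${DISTNAME:S/-b/-0.0.2./} modifier:
# replace the first "-b" in the distname with "-0.0.2.".
def pkgname_from_distname(distname):
    return distname.replace("-b", "-0.0.2.", 1)

print(pkgname_from_distname("llama.cpp-b3183"))  # llama.cpp-0.0.2.3183
```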

