File "/home/louis/lab/r1/src/r1/silent_thought_vllm.py", line 115, in think
output = llm.generate(prompt, sampling_params, use_tqdm=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/utils.py", line 1021, in inner
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 454, in generate
self._validate_and_add_requests(
File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1175, in _validate_and_add_requests
self._add_request(
File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1193, in _add_request
self.llm_engine.add_request(
File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 163, in add_request
self.engine_core.add_request(engine_core_req)
File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 215, in add_request
self._send_input(EngineCoreRequestType.ADD, request)
File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 211, in _send_input
msg = (request_type.value, self.encoder.encode(request))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/v1/serial_utils.py", line 7, in encode
return pickle.dumps(obj)
^^^^^^^^^^^^^^^^^
AttributeError: Can't get local object 'adapt_tokenizer.<locals>.convert_token_to_string'
That’s great news! TBH, I’ve struggled so far to reproduce a working outlines test environment from scratch, mainly due to issues with vllm... if upgrading our dependency resolves this as a side effect, that would be nice! #1389 (comment)
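The original script isn't included, so this is only a hypothetical minimal sketch, assuming the Outlines 0.x vLLM integration (`outlines.models.vllm` plus `outlines.generate.regex`) running against a vLLM build that uses the new v1 engine; the model name and regex are placeholders:

```python
# Hypothetical repro sketch -- not the original silent_thought_vllm.py.
# Assumes: outlines 0.x, and vLLM with the v1 engine enabled
# (e.g. VLLM_USE_V1=1 or a version where v1 is the default).
import outlines

# Any model should work; this one is only chosen for being small.
model = outlines.models.vllm("Qwen/Qwen2.5-0.5B-Instruct")

# Structured generation adapts the tokenizer (adapt_tokenizer) before the
# request reaches the engine, which is where the serialization step fails.
generator = outlines.generate.regex(model, r"[0-9]{3}")
print(generator("Pick a three-digit number: "))
```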
Expected result:

(working generation!)

Error message:

```
Traceback (most recent call last):
  File "/home/louis/lab/r1/src/r1/silent_thought_vllm.py", line 115, in think
    output = llm.generate(prompt, sampling_params, use_tqdm=False)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/utils.py", line 1021, in inner
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 454, in generate
    self._validate_and_add_requests(
  File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1175, in _validate_and_add_requests
    self._add_request(
  File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1193, in _add_request
    self.llm_engine.add_request(
  File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 163, in add_request
    self.engine_core.add_request(engine_core_req)
  File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 215, in add_request
    self._send_input(EngineCoreRequestType.ADD, request)
  File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 211, in _send_input
    msg = (request_type.value, self.encoder.encode(request))
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/louis/lab/r1/.venv/lib/python3.12/site-packages/vllm/v1/serial_utils.py", line 7, in encode
    return pickle.dumps(obj)
           ^^^^^^^^^^^^^^^^^
AttributeError: Can't get local object 'adapt_tokenizer.<locals>.convert_token_to_string'
```
Outlines/Python version information:

Version information

Context for the issue:

It just got announced in alpha, so I thought I should report it :-)
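For what it's worth, this looks like a generic pickling limitation rather than anything model-specific: the v1 engine serializes each request with `pickle` to ship it to the engine-core process, and `adapt_tokenizer` attaches a function defined inside another function (`convert_token_to_string`) to the tokenizer, which `pickle` cannot serialize by reference. A standalone sketch of the same failure, with hypothetical names:

```python
import pickle

def adapt(carrier):
    # A function defined inside another function is a "local object":
    # pickle stores plain functions by qualified name, and a name
    # containing "<locals>" cannot be looked up again at load time.
    def convert(token):
        return str(token)
    carrier.convert = convert
    return carrier

class Carrier:
    pass

pickle.dumps(adapt(Carrier()))
# AttributeError: Can't get local object 'adapt.<locals>.convert'
# (older Python versions word this "Can't pickle local object ...")
```

If that is indeed the cause, moving `convert_token_to_string` to module level (or making it a method on the adapted tokenizer) should make the request picklable again.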
That’s great news! TBH, I’ve struggled so far to reproduce a working outlines test environment from scratch, mainly due to issues with vllm... if upgrading our dependency resolves this as a side effect, that would be nice! #1389 (comment)