vLLM support for LFM2-VL models requires specific vLLM and transformers versions. Install both from source as shown below.
Install:

```bash
VLLM_PRECOMPILED_WHEEL_COMMIT=72506c98349d6bcd32b4e33eec7b5513453c1502 \
VLLM_USE_PRECOMPILED=1 \
pip install git+https://github.com/vllm-project/vllm.git
pip install git+https://github.com/huggingface/transformers.git@3c2517727ce28a30f5044e01663ee204deb1cdbe pillow
```
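To confirm the pinned builds were actually picked up, you can print the installed versions. A quick sanity check; the exact version strings depend on the commits above:

```python
import transformers
import vllm

# Version strings reflect the pinned source commits above
print("vLLM:", vllm.__version__)
print("transformers:", transformers.__version__)
```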
Run:

```python
from vllm import LLM, SamplingParams

IMAGE_URL = "http://images.cocodataset.org/val2017/000000039769.jpg"

# Load the model; max_model_len bounds the combined prompt + output length
llm = LLM(
    model="LiquidAI/LFM2-VL-3B",
    max_model_len=1024,
)

# Greedy decoding with up to 256 generated tokens
sampling_params = SamplingParams(
    temperature=0.0,
    max_tokens=256,
)

# OpenAI-style chat message combining an image URL and a text prompt
messages = [{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": IMAGE_URL}},
        {"type": "text", "text": "Describe what you see in this image."},
    ],
}]

outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```
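The same checkpoint can also be served through vLLM's OpenAI-compatible server (started with `vllm serve LiquidAI/LFM2-VL-3B --max-model-len 1024`) and queried with the standard `openai` client. A minimal sketch, assuming the server is listening on the default port 8000; the base URL and placeholder API key are local conventions, not part of the model:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; the key is a placeholder since
# vLLM does not require authentication by default
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="LiquidAI/LFM2-VL-3B",
    temperature=0.0,
    max_tokens=256,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": "http://images.cocodataset.org/val2017/000000039769.jpg"}},
            {"type": "text", "text": "Describe what you see in this image."},
        ],
    }],
)
print(response.choices[0].message.content)
```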