LFM2-350M-ENJP-MT is a specialized translation model for near real-time, bidirectional Japanese/English translation. It is optimized for short-to-medium text at low latency.

Specifications

Property         Value
---------------  -------------------
Parameters       350M
Context Length   32K tokens
Task             Machine Translation
Languages        English ↔ Japanese

Highlights

  • Real-time Translation: low-latency inference
  • Bidirectional: EN→JP and JP→EN
  • Edge Deployment: compact model size

Prompting Recipe

This model selects the translation direction via the system prompt, so a specific system prompt is required. It supports single-turn conversations only.
System Prompts:
  • "Translate to Japanese." β€” English β†’ Japanese
  • "Translate to English." β€” Japanese β†’ English

Quick Start

Install:
pip install transformers torch
English to Japanese:
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "LiquidAI/LFM2-350M-ENJP-MT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "Translate to Japanese."},
    {"role": "user", "content": "What is C. elegans?"}
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the echoed prompt.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
# Output: C. elegansとは何ですか？
Japanese to English:
messages = [
    {"role": "system", "content": "Translate to English."},
    {"role": "user", "content": "今ζ—₯γ―ε€©ζ°—γŒγ„γ„γ§γ™γ­γ€‚"}
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the echoed prompt.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
# Output: The weather is nice today.
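The two snippets above differ only in the system prompt and input text, so they can be folded into one helper. This is a minimal sketch, not part of the model's API: the `translate` and `build_messages` function names and the `"en-jp"`/`"jp-en"` direction keys are illustrative; only the two system prompts come from the prompting recipe above.

```python
# Reusable wrapper around the Quick Start code. Assumes `model` and
# `tokenizer` were loaded as shown above; the helper names and direction
# keys are illustrative, only the system prompts are prescribed.

_SYSTEM_PROMPTS = {
    "en-jp": "Translate to Japanese.",
    "jp-en": "Translate to English.",
}

def build_messages(text: str, direction: str) -> list:
    """Build the single-turn conversation the model expects."""
    return [
        {"role": "system", "content": _SYSTEM_PROMPTS[direction]},
        {"role": "user", "content": text},
    ]

def translate(text, direction, model, tokenizer, max_new_tokens=256):
    """Translate `text` in the given direction ("en-jp" or "jp-en")."""
    inputs = tokenizer.apply_chat_template(
        build_messages(text, direction),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Usage: `translate("今日は天気がいいですね。", "jp-en", model, tokenizer)`.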