Describe the bug
Getting the error 'the requested model is not a chat model' while running the code given in the dummy agent library.
To Reproduce
import os
from huggingface_hub import InferenceClient

HF_TOKEN = os.environ.get("HF_TOKEN")

# Authenticate with the token read from the environment
client = InferenceClient(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    token=HF_TOKEN,
)

output = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "The capital of France is"},
    ],
    stream=False,
    max_tokens=20,
)

print(output.choices[0].message.content)
Running this code produces the following error:
Bad request:
{'message': "The requested model 'meta-llama/Llama-4-Scout-17B-16E-Instruct' is not a chat model.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'model_not_supported'}
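A minimal sketch of a possible workaround, assuming the failure comes from the default inference provider not serving this checkpoint as a chat model. It routes the request through an explicit provider via the provider parameter of InferenceClient (available in recent huggingface_hub versions); the provider name "together" here is an assumption for illustration, not confirmed by this issue.

import os
from huggingface_hub import InferenceClient

HF_TOKEN = os.environ.get("HF_TOKEN")

# Assumption: "together" serves this model for chat completions; any
# provider that lists the model as a chat model on its model page
# should work the same way.
client = InferenceClient(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    provider="together",
    token=HF_TOKEN,
)

output = client.chat.completions.create(
    messages=[{"role": "user", "content": "The capital of France is"}],
    max_tokens=20,
)
print(output.choices[0].message.content)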