Update dummy_agent_library.ipynb

#86

When I execute

from huggingface_hub import InferenceClient

# client set up with the model named below, as in the notebook
client = InferenceClient("meta-llama/Llama-3.2-3B-Instruct")

output = client.text_generation(
    "The capital of france is",
    max_new_tokens=100,
)

print(output)

with the model meta-llama/Llama-3.2-3B-Instruct, it throws the following error:


ValueError Traceback (most recent call last)
Cell In[5], line 3
1 # As seen in the LLM section, if we just do decoding, the model will only stop when it predicts an EOS token,
2 # and this does not happen here because this is a conversational (chat) model and we didn't apply the chat template it expects.
----> 3 output = client.text_generation(
4 "The capital of france is",
5 max_new_tokens=100,
6 )
8 print(output)

File c:\Python312\Lib\site-packages\huggingface_hub\inference\_client.py:2298, in InferenceClient.text_generation(self, prompt, details, stream, model, adapter_id, best_of, decoder_input_details, do_sample, frequency_penalty, grammar, max_new_tokens, repetition_penalty, return_full_text, seed, stop, stop_sequences, temperature, top_k, top_n_tokens, top_p, truncate, typical_p, watermark)
2296 model_id = model or self.model
2297 provider_helper = get_provider_helper(self.provider, task="text-generation", model=model_id)
-> 2298 request_parameters = provider_helper.prepare_request(
2299 inputs=prompt,
2300 parameters=parameters,
2301 extra_payload={"stream": stream},
2302 headers=self.headers,
2303 model=model_id,
2304 api_key=self.token,
2305 )
2307 # Handle errors separately for more precise error messages
2308 try:

File c:\Python312\Lib\site-packages\huggingface_hub\inference\_providers\_common.py:67, in TaskProviderHelper.prepare_request(self, inputs, parameters, headers, model, api_key, extra_payload)
64 api_key = self._prepare_api_key(api_key)
66 # mapped model from HF model ID
---> 67 provider_mapping_info = self._prepare_mapping_info(model)
69 # default HF headers + user headers (to customize in subclasses)
70 headers = self._prepare_headers(headers, api_key)

File c:\Python312\Lib\site-packages\huggingface_hub\inference\_providers\_common.py:131, in TaskProviderHelper._prepare_mapping_info(self, model)
128 raise ValueError(f"Model {model} is not supported by provider {self.provider}.")
130 if provider_mapping.task != self.task:
--> 131 raise ValueError(
132 f"Model {model} is not supported for task {self.task} and provider {self.provider}. "
133 f"Supported task: {provider_mapping.task}."
134 )
136 if provider_mapping.status == "staging":
137 logger.warning(
138 f"Model {model} is in staging mode for provider {self.provider}. Meant for test purposes only."
139 )

ValueError: Model meta-llama/Llama-3.2-3B-Instruct is not supported for task text-generation and provider together. Supported task: conversational.

But when I run it with meta-llama/Llama-3.1-8B-Instruct, it works.
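Side note: the error message itself points at a workaround. The provider serves meta-llama/Llama-3.2-3B-Instruct only for the conversational task, so sending the same prompt through InferenceClient.chat_completion should go through with that model. A minimal sketch (my own, not from the notebook):

from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Llama-3.2-3B-Instruct")

# chat_completion targets the conversational task, which is the task
# the provider mapping in the traceback above says this model supports
output = client.chat_completion(
    messages=[{"role": "user", "content": "The capital of france is"}],
    max_tokens=100,
)

print(output.choices[0].message.content)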

[screenshot attached: error.jpeg]

Thanks for the fix!

I tried the code on Colab with "meta-llama/Llama-3.1-8B-Instruct" and it works. We should merge. Minor suggestion: changing the InferenceClient() model parameter to "meta-llama/Llama-3.1-8B-Instruct" as well might be a good idea (someone who's distracted might miss the comment); see the snippet below.
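Concretely, the suggestion amounts to a one-line change (hypothetical snippet; the exact cell in the notebook may differ):

from huggingface_hub import InferenceClient

# pass the supported model explicitly so a distracted reader can't miss it
client = InferenceClient("meta-llama/Llama-3.1-8B-Instruct")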

Hugging Face Agents Course org

Thanks for the update!
We're closing this since we've made some internal changes. If the issue persists, feel free to reopen the PR.
Apologies for the inconvenience, and thanks again for your contribution!

sergiopaniego changed pull request status to closed
