4040ca3c306a-3
dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.mlflow.MlflowCohereEmbeddings.html
4040ca3c306a-4
Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.mlflow.MlflowCohereEmbeddings.html
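The methods listed above are the standard pydantic (v1-style) BaseModel helpers that every LangChain embeddings class inherits. A minimal sketch of how they compose, using a stand-in model rather than MlflowCohereEmbeddings itself; the class name and field below are illustrative assumptions, not part of the documented API:

from pydantic import BaseModel  # v1-style API; on pydantic v2 installs, use `from pydantic.v1 import BaseModel`

class ExampleModel(BaseModel):
    # Stand-in for any of the embeddings models documented here.
    endpoint: str = "embeddings"

# parse_obj / parse_raw build a validated instance from a dict or a JSON string.
m = ExampleModel.parse_obj({"endpoint": "cohere-embeddings"})
m2 = ExampleModel.parse_raw('{"endpoint": "cohere-embeddings"}')

# schema / schema_json describe the model; validate coerces arbitrary input.
print(ExampleModel.schema()["properties"]["endpoint"])
assert ExampleModel.validate({"endpoint": "x"}).endpoint == "x"

The same calls work unchanged on the embeddings classes, since they derive from the same BaseModel.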
9f61b01c04d4-0
langchain_community.embeddings.huggingface.HuggingFaceInstructEmbeddings¶ class langchain_community.embeddings.huggingface.HuggingFaceInstructEmbeddings[source]¶ Bases: BaseModel, Embeddings Wrapper around sentence_transformers embedding models. To use, you should have the sentence_transformers and InstructorEmbedding python packages installed. Example from langchain_community.embeddings import HuggingFaceInstructEmbeddings model_name = "hkunlp/instructor-large" model_kwargs = {'device': 'cpu'} encode_kwargs = {'normalize_embeddings': True} hf = HuggingFaceInstructEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) Initialize the sentence_transformer. param cache_folder: Optional[str] = None¶ Path to store models. Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable. param embed_instruction: str = 'Represent the document for retrieval: '¶ Instruction to use for embedding documents. param encode_kwargs: Dict[str, Any] [Optional]¶ Keyword arguments to pass when calling the encode method of the model. param model_kwargs: Dict[str, Any] [Optional]¶ Keyword arguments to pass to the model. param model_name: str = 'hkunlp/instructor-large'¶ Model name to use. param query_instruction: str = 'Represent the question for retrieving supporting documents: '¶ Instruction to use for embedding query. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float]
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.huggingface.HuggingFaceInstructEmbeddings.html
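As a hedged illustration of the instruction-prefixed workflow described above (it assumes the sentence_transformers and InstructorEmbedding packages are installed; the sample texts are placeholders):

from langchain_community.embeddings import HuggingFaceInstructEmbeddings

hf = HuggingFaceInstructEmbeddings(
    model_name="hkunlp/instructor-large",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True},
)

# embed_instruction is prepended to each document, query_instruction to each query.
doc_vectors = hf.embed_documents(["LangChain ships many embeddings integrations."])
query_vector = hf.embed_query("Which embeddings integrations exist?")
print(len(doc_vectors), len(query_vector))  # one document vector; the query vector's dimensionality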
9f61b01c04d4-1
Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.huggingface.HuggingFaceInstructEmbeddings.html
9f61b01c04d4-2
self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Compute doc embeddings using a HuggingFace instruct model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Compute query embeddings using a HuggingFace instruct model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.huggingface.HuggingFaceInstructEmbeddings.html
9f61b01c04d4-3
Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.huggingface.HuggingFaceInstructEmbeddings.html
9f61b01c04d4-4
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using HuggingFaceInstructEmbeddings¶ Hugging Face Instruct Embeddings on Hugging Face
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.huggingface.HuggingFaceInstructEmbeddings.html
33b59943d801-0
langchain_community.embeddings.anyscale.AnyscaleEmbeddings¶ class langchain_community.embeddings.anyscale.AnyscaleEmbeddings[source]¶ Bases: OpenAIEmbeddings Anyscale Embeddings API. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_special: Union[Literal['all'], Set[str]] = {}¶ param anyscale_api_base: str = 'https://api.endpoints.anyscale.com/v1'¶ Base URL path for API requests. param anyscale_api_key: SecretStr = None¶ AnyScale Endpoints API keys. Constraints type = string writeOnly = True format = password param chunk_size: int = 1000¶ Maximum number of texts to embed in each batch param default_headers: Union[Mapping[str, str], None] = None¶ param default_query: Union[Mapping[str, object], None] = None¶ param deployment: Optional[str] = 'text-embedding-ada-002'¶ param disallowed_special: Union[Literal['all'], Set[str], Sequence[str]] = 'all'¶ param embedding_ctx_length: int = 500¶ The maximum number of tokens to embed at once. param headers: Any = None¶ param http_client: Union[Any, None] = None¶ Optional httpx.Client. param max_retries: int = 2¶ Maximum number of retries to make when generating. param model: str = 'thenlper/gte-large'¶ Model name to use. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param openai_api_base: Optional[str] = None (alias 'base_url')¶
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.anyscale.AnyscaleEmbeddings.html
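The reference above lists the parameters but no usage example; a minimal sketch under the assumption that an Anyscale Endpoints key is available (the key value and texts are placeholders):

from langchain_community.embeddings import AnyscaleEmbeddings

embeddings = AnyscaleEmbeddings(
    anyscale_api_key="esecret-placeholder",  # placeholder; usually read from the environment instead
    model="thenlper/gte-large",              # the documented default model
)

# chunk_size (default 1000) controls how many texts go to the API per batch.
vectors = embeddings.embed_documents(["hello", "world"])
query_vector = embeddings.embed_query("hello")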
33b59943d801-1
param openai_api_base: Optional[str] = None (alias 'base_url')¶ Base URL path for API requests, leave blank if not using a proxy or service emulator. param openai_api_key: Optional[str] = None (alias 'api_key')¶ Automatically inferred from env var OPENAI_API_KEY if not provided. param openai_api_type: Optional[str] = None¶ param openai_api_version: Optional[str] = None (alias 'api_version')¶ Automatically inferred from env var OPENAI_API_VERSION if not provided. param openai_organization: Optional[str] = None (alias 'organization')¶ Automatically inferred from env var OPENAI_ORG_ID if not provided. param openai_proxy: Optional[str] = None¶ param request_timeout: Optional[Union[float, Tuple[float, float], Any]] = None (alias 'timeout')¶ Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None. param retry_max_seconds: int = 20¶ Max number of seconds to wait between retries param retry_min_seconds: int = 4¶ Min number of seconds to wait between retries param show_progress_bar: bool = False¶ Whether to show a progress bar when embedding. param skip_empty: bool = False¶ Whether to skip empty strings when embedding or raise an error. Defaults to not skipping. param tiktoken_enabled: bool = False¶ Set this to False for non-OpenAI implementations of the embeddings API param tiktoken_model_name: Optional[str] = None¶ The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.anyscale.AnyscaleEmbeddings.html
33b59943d801-2
them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here. async aembed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]]¶ Call out to OpenAI’s embedding endpoint async for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. chunk_size (Optional[int]) – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Call out to OpenAI’s embedding endpoint async for embedding query text. Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.anyscale.AnyscaleEmbeddings.html
33b59943d801-3
values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.anyscale.AnyscaleEmbeddings.html
33b59943d801-4
exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]]¶ Call out to OpenAI’s embedding endpoint for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. chunk_size (Optional[int]) – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float]¶ Call out to OpenAI’s embedding endpoint for embedding query text. Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.anyscale.AnyscaleEmbeddings.html
33b59943d801-5
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.anyscale.AnyscaleEmbeddings.html
33b59943d801-6
ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model property lc_secrets: Dict[str, str]¶ Examples using AnyscaleEmbeddings¶ Anyscale
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.anyscale.AnyscaleEmbeddings.html
3952c8314930-0
langchain_community.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding¶ class langchain_community.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding[source]¶ Bases: AlephAlphaAsymmetricSemanticEmbedding Symmetric version of Aleph Alpha’s semantic embeddings. The main difference is that here, both the documents and queries are embedded with SemanticRepresentation.Symmetric. Example from langchain_community.embeddings import AlephAlphaSymmetricSemanticEmbedding embeddings = AlephAlphaSymmetricSemanticEmbedding( normalize=True, compress_to_size=128 ) text = "This is a test text" doc_result = embeddings.embed_documents([text]) query_result = embeddings.embed_query(text) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param aleph_alpha_api_key: Optional[str] = None¶ API key for Aleph Alpha API. param compress_to_size: Optional[int] = None¶ Should the returned embeddings come back as an original 5120-dim vector, or should it be compressed to 128-dim. param contextual_control_threshold: Optional[int] = None¶ Attention control parameters only apply to those tokens that have explicitly been set in the request. param control_log_additive: bool = True¶ Apply controls on prompt items by adding the log(control_factor) to attention scores. param host: str = 'https://api.aleph-alpha.com'¶ The hostname of the API host. The default one is “https://api.aleph-alpha.com”. param hosting: Optional[str] = None¶ Determines in which datacenters the request may be processed.
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding.html
3952c8314930-1
Determines in which datacenters the request may be processed. You can either set the parameter to “aleph-alpha” or omit it (defaulting to None). Not setting this value, or setting it to None, gives us maximal flexibility in processing your request in our own datacenters and on servers hosted with other providers. Choose this option for maximal availability. Setting it to “aleph-alpha” allows us to only process the request in our own datacenters. Choose this option for maximal data privacy. param model: str = 'luminous-base'¶ Model name to use. param nice: bool = False¶ Setting this to True, will signal to the API that you intend to be nice to other users by de-prioritizing your request below concurrent ones. param normalize: bool = False¶ Should returned embeddings be normalized param request_timeout_seconds: int = 305¶ Client timeout that will be set for HTTP requests in the requests library’s API calls. Server will close all requests after 300 seconds with an internal server error. param total_retries: int = 8¶ The number of retries made in case requests fail with certain retryable status codes. If the last retry fails a corresponding exception is raised. Note, that between retries an exponential backoff is applied, starting with 0.5 s after the first retry and doubling for each retry made. So with the default setting of 8 retries a total wait time of 63.5 s is added between the retries. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding.html
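The 63.5 s figure quoted above for the default total_retries = 8 follows from the doubling backoff: the waits between retries form the geometric series 0.5, 1, 2, ... seconds. A quick check of that arithmetic:

# 8 retries leave 7 gaps; the first wait is 0.5 s and each subsequent wait doubles.
waits = [0.5 * 2**k for k in range(7)]
print(waits)       # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
print(sum(waits))  # 63.5, matching the documented total wait time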
3952c8314930-2
Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding.html
3952c8314930-3
self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Call out to Aleph Alpha’s Document endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Call out to Aleph Alpha’s asymmetric, query embedding endpoint :param text: The text to embed. Returns Embeddings for the text. Parameters text (str) – Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding.html
3952c8314930-4
Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding.html
3952c8314930-5
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using AlephAlphaSymmetricSemanticEmbedding¶ Aleph Alpha
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding.html
e613cec46a49-0
langchain_community.embeddings.sambanova.SambaStudioEmbeddings¶ class langchain_community.embeddings.sambanova.SambaStudioEmbeddings[source]¶ Bases: BaseModel, Embeddings SambaNova embedding models. To use, you should have the environment variables SAMBASTUDIO_EMBEDDINGS_BASE_URL, SAMBASTUDIO_EMBEDDINGS_PROJECT_ID, SAMBASTUDIO_EMBEDDINGS_ENDPOINT_ID, and SAMBASTUDIO_EMBEDDINGS_API_KEY set with your personal SambaStudio values, or pass them as named parameters to the constructor. Example from langchain_community.embeddings import SambaStudioEmbeddings embeddings = SambaStudioEmbeddings(sambastudio_embeddings_base_url=base_url, sambastudio_embeddings_project_id=project_id, sambastudio_embeddings_endpoint_id=endpoint_id, sambastudio_embeddings_api_key=api_key) (or) embeddings = SambaStudioEmbeddings() Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param API_BASE_PATH = '/api/predict/nlp/'¶ Base path to use for the API param sambastudio_embeddings_api_key: str = ''¶ SambaStudio API key param sambastudio_embeddings_base_url: str = ''¶ Base URL to use param sambastudio_embeddings_endpoint_id: str = ''¶ Endpoint id on SambaStudio for the model param sambastudio_embeddings_project_id: str = ''¶ Project id on SambaStudio for the model async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]]
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.sambanova.SambaStudioEmbeddings.html
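A sketch of the environment-variable path mentioned above; the values are placeholders for your own SambaStudio credentials:

import os
from langchain_community.embeddings import SambaStudioEmbeddings

os.environ["SAMBASTUDIO_EMBEDDINGS_BASE_URL"] = "https://your-instance.example.com"
os.environ["SAMBASTUDIO_EMBEDDINGS_PROJECT_ID"] = "your-project-id"
os.environ["SAMBASTUDIO_EMBEDDINGS_ENDPOINT_ID"] = "your-endpoint-id"
os.environ["SAMBASTUDIO_EMBEDDINGS_API_KEY"] = "your-api-key"

# With the variables set, the no-argument constructor from the example works.
embeddings = SambaStudioEmbeddings()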
e613cec46a49-1
Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.sambanova.SambaStudioEmbeddings.html
e613cec46a49-2
self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str], batch_size: int = 32) → List[List[float]][source]¶ Returns a list of embeddings for the given texts. :param texts: List of texts to encode :type texts: List[str] :param batch_size: Batch size for the encoding :type batch_size: int Returns List of embeddings, one for each text Return type List[List[float]] Parameters texts (List[str]) – batch_size (int) – embed_query(text: str) → List[float][source]¶ Returns the embedding for the given text. :param text: The text to encode :type text: str Returns Embedding for the given text Return type List[float] Parameters text (str) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.sambanova.SambaStudioEmbeddings.html
e613cec46a49-3
List[float] Parameters text (str) – classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.sambanova.SambaStudioEmbeddings.html
e613cec46a49-4
proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using SambaStudioEmbeddings¶ SambaNova
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.sambanova.SambaStudioEmbeddings.html
fad2dd68bf06-0
langchain_community.embeddings.text2vec.Text2vecEmbeddings¶ class langchain_community.embeddings.text2vec.Text2vecEmbeddings[source]¶ Bases: Embeddings, BaseModel text2vec embedding models. Install text2vec first: run ‘pip install -U text2vec’. The GitHub repository for text2vec is: https://github.com/shibing624/text2vec Example from langchain_community.embeddings.text2vec import Text2vecEmbeddings embedding = Text2vecEmbeddings() embedding.embed_documents([ "This is a CoSENT(Cosine Sentence) model.", "It maps sentences to a 768 dimensional dense vector space.", ]) embedding.embed_query( "It can be used for text matching or semantic search." ) param device: Optional[str] = None¶ param encoder_type: Any = 'MEAN'¶ param max_seq_length: int = 256¶ param model: Any = None¶ param model_name_or_path: Optional[str] = None¶ async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.text2vec.Text2vecEmbeddings.html
fad2dd68bf06-1
Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.text2vec.Text2vecEmbeddings.html
fad2dd68bf06-2
by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed documents using the text2vec embeddings model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed a query using the text2vec embeddings model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.text2vec.Text2vecEmbeddings.html
fad2dd68bf06-3
by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.text2vec.Text2vecEmbeddings.html
fad2dd68bf06-4
dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.text2vec.Text2vecEmbeddings.html
f69ee8978fcb-0
langchain_nomic.embeddings.NomicEmbeddings¶ class langchain_nomic.embeddings.NomicEmbeddings(*, model: str, nomic_api_key: Optional[str] = ..., dimensionality: Optional[int] = ..., inference_mode: Literal['remote'] = ...)[source]¶ class langchain_nomic.embeddings.NomicEmbeddings(*, model: str, nomic_api_key: Optional[str] = ..., dimensionality: Optional[int] = ..., inference_mode: Literal['local', 'dynamic'], device: Optional[str] = ...) class langchain_nomic.embeddings.NomicEmbeddings(*, model: str, nomic_api_key: Optional[str] = ..., dimensionality: Optional[int] = ..., inference_mode: str, device: Optional[str] = ...) NomicEmbeddings embedding model. Example from langchain_nomic import NomicEmbeddings model = NomicEmbeddings() Initialize NomicEmbeddings model. Parameters model (str) – model name nomic_api_key (Optional[str]) – optionally, set the Nomic API key. Uses the NOMIC_API_KEY environment variable by default. dimensionality (Optional[int]) – The embedding dimension, for use with Matryoshka-capable models. Defaults to full-size. inference_mode (str) – How to generate embeddings. One of remote, local (Embed4All), or dynamic (automatic). Defaults to remote. device (Optional[str]) – The device to use for local embeddings. Choices include cpu, gpu, nvidia, amd, or a specific device name. See the docstring for GPT4All.__init__ for more info. Typically defaults to CPU. Do not use on macOS. vision_model (Optional[str]) – Methods __init__() Initialize NomicEmbeddings model. aembed_documents(texts) Asynchronous Embed search docs.
https://api.python.langchain.com/en/latest/embeddings/langchain_nomic.embeddings.NomicEmbeddings.html
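The one-line example above omits the required model argument; a fuller hedged sketch of the documented overloads (the model name is a placeholder for whichever Nomic model you use):

from langchain_nomic import NomicEmbeddings

# Remote inference (the default); NOMIC_API_KEY is read from the environment.
remote = NomicEmbeddings(model="nomic-embed-text-v1.5")

# Local inference via Embed4All takes the local/dynamic overload plus a device.
local = NomicEmbeddings(
    model="nomic-embed-text-v1.5",
    inference_mode="local",
    device="cpu",
)

# Matryoshka-capable models can be truncated with dimensionality.
small = NomicEmbeddings(model="nomic-embed-text-v1.5", dimensionality=256)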
f69ee8978fcb-1
aembed_documents(texts) Asynchronous Embed search docs. aembed_query(text) Asynchronous Embed query text. embed(texts, *, task_type) Embed texts. embed_documents(texts) Embed search docs. embed_image(uris) embed_query(text) Embed query text. __init__(*, model: str, nomic_api_key: Optional[str] = None, dimensionality: Optional[int] = None, inference_mode: Literal['remote'] = 'remote')[source]¶ __init__(*, model: str, nomic_api_key: Optional[str] = None, dimensionality: Optional[int] = None, inference_mode: Literal['local', 'dynamic'], device: Optional[str] = None) __init__(*, model: str, nomic_api_key: Optional[str] = None, dimensionality: Optional[int] = None, inference_mode: str, device: Optional[str] = None) Initialize NomicEmbeddings model. Parameters model – model name nomic_api_key – optionally, set the Nomic API key. Uses the NOMIC_API_KEY environment variable by default. dimensionality – The embedding dimension, for use with Matryoshka-capable models. Defaults to full-size. inference_mode – How to generate embeddings. One of remote, local (Embed4All), or dynamic (automatic). Defaults to remote. device – The device to use for local embeddings. Choices include cpu, gpu, nvidia, amd, or a specific device name. See the docstring for GPT4All.__init__ for more info. Typically defaults to CPU. Do not use on macOS. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]]
https://api.python.langchain.com/en/latest/embeddings/langchain_nomic.embeddings.NomicEmbeddings.html
f69ee8978fcb-2
Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] embed(texts: List[str], *, task_type: str) → List[List[float]][source]¶ Embed texts. Parameters texts (List[str]) – list of texts to embed task_type (str) – the task type to use when embedding. One of search_query, search_document, classification, clustering Return type List[List[float]] embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed search docs. Parameters texts (List[str]) – list of texts to embed as documents Return type List[List[float]] embed_image(uris: List[str]) → List[List[float]][source]¶ Parameters uris (List[str]) – Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed query text. Parameters text (str) – query text Return type List[float]
https://api.python.langchain.com/en/latest/embeddings/langchain_nomic.embeddings.NomicEmbeddings.html
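The task_type parameter on embed selects among the four documented task prefixes; embed_documents and embed_query presumably route through it with search_document and search_query respectively (that routing is an assumption, not stated above). A hedged sketch with placeholder texts:

from langchain_nomic import NomicEmbeddings

model = NomicEmbeddings(model="nomic-embed-text-v1.5")  # placeholder model name

doc_vecs = model.embed(["a passage to index"], task_type="search_document")
query_vec = model.embed(["what should I retrieve?"], task_type="search_query")[0]

# The remaining documented task types target grouping and labeling workloads.
cluster_vecs = model.embed(["group me", "and me"], task_type="clustering")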
51b927f3d218-0
langchain_community.embeddings.self_hosted.SelfHostedEmbeddings¶ class langchain_community.embeddings.self_hosted.SelfHostedEmbeddings[source]¶ Bases: SelfHostedPipeline, Embeddings Custom embedding models on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example using a model load function: from langchain_community.embeddings import SelfHostedEmbeddings from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") def get_pipeline(): model_id = "facebook/bart-large" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) return pipeline("feature-extraction", model=model, tokenizer=tokenizer) embeddings = SelfHostedEmbeddings( model_load_fn=get_pipeline, hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Example passing in a pipeline path: from langchain_community.embeddings import SelfHostedHFEmbeddings import runhouse as rh import pickle from transformers import pipeline gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") pipeline = pipeline(model="bert-base-uncased", task="feature-extraction") rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models") embeddings = SelfHostedHFEmbeddings.from_pipeline(
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.self_hosted.SelfHostedEmbeddings.html
51b927f3d218-1
embeddings = SelfHostedHFEmbeddings.from_pipeline( pipeline="models/pipeline.pkl", hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Init the pipeline with an auxiliary function. The load function must be in global scope to be imported and run on the server, i.e. in a module and not a REPL or closure. Then, initialize the remote inference function. param allow_dangerous_deserialization: bool = False¶ Allow deserialization using pickle which can be dangerous if loading compromised data. param cache: Union[BaseCache, bool, None] = None¶ Whether to cache the response. If true, will use the global cache. If false, will not use a cache If None, will use the global cache if it’s set, otherwise no cache. If instance of BaseCache, will use the provided cache. Caching is not currently supported for streaming methods of models. param callback_manager: Optional[BaseCallbackManager] = None¶ [DEPRECATED] param callbacks: Callbacks = None¶ Callbacks to add to the run trace. param custom_get_token_ids: Optional[Callable[[str], List[int]]] = None¶ Optional encoder to use for counting tokens. param hardware: Any = None¶ Remote hardware to send the inference function to. param inference_fn: Callable = <function _embed_documents>¶ Inference function to extract the embeddings on the remote hardware. param inference_kwargs: Any = None¶ Any kwargs to pass to the model’s inference function. param load_fn_kwargs: Optional[dict] = None¶ Keyword arguments to pass to the model load function. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_load_fn: Callable [Required]¶
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.self_hosted.SelfHostedEmbeddings.html
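Once either setup above completes, the object behaves like any other Embeddings implementation. A hedged continuation of the first example (it assumes the runhouse cluster came up and the embeddings object was built as shown; the texts are placeholders):

# Continuing from the `embeddings` object in the model-load-function example.
doc_vectors = embeddings.embed_documents(
    ["Self-hosted embeddings run the pipeline on remote hardware."]
)
query_vector = embeddings.embed_query("Where does the pipeline run?")
print(len(doc_vectors[0]), len(query_vector))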
51b927f3d218-2
Metadata to add to the run trace. param model_load_fn: Callable [Required]¶ Function to load the model remotely on the server. param model_reqs: List[str] = ['./', 'torch']¶ Requirements to install on the hardware to run inference on the model. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ [Deprecated] Check Cache and run the LLM on the given prompt and input. Notes Deprecated since version langchain-core==0.1.7: Use invoke instead. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – tags (Optional[List[str]]) – metadata (Optional[Dict[str, Any]]) – kwargs (Any) – Return type str async abatch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently;
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.self_hosted.SelfHostedEmbeddings.html
51b927f3d218-3
Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. Parameters inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]]]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Any) – Return type List[str] async abatch_as_completed(inputs: Sequence[Input], config: Optional[Union[RunnableConfig, Sequence[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → AsyncIterator[Tuple[int, Union[Output, Exception]]]¶ Run ainvoke in parallel on a list of inputs, yielding results as they complete. Parameters inputs (Sequence[Input]) – config (Optional[Union[RunnableConfig, Sequence[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Optional[Any]) – Return type AsyncIterator[Tuple[int, Union[Output, Exception]]] async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float]
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.self_hosted.SelfHostedEmbeddings.html
51b927f3d218-4
Parameters text (str) – Return type List[float] async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts to a model and return generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[str]) – List of string prompts. stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. tags (Optional[Union[List[str], List[List[str]]]]) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.self_hosted.SelfHostedEmbeddings.html
51b927f3d218-5
tags (Optional[Union[List[str], List[List[str]]]]) – metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) – run_name (Optional[Union[str, List[str]]]) – run_id (Optional[Union[UUID, List[Optional[UUID]]]]) – **kwargs – Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult async ainvoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]]) – config (Optional[RunnableConfig]) – stop (Optional[List[str]]) – kwargs (Any) – Return type str async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ [Deprecated] Notes Deprecated since version langchain-core==0.1.7: Use ainvoke instead. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ [Deprecated] Notes Deprecated since version langchain-core==0.1.7: Use ainvoke instead. Parameters
messages (List[BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type BaseMessage assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶ Assigns new fields to the dict output of this runnable. Returns a new runnable.

from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = prompt | llm | {"str": StrOutputParser()}
chain_with_assign = chain.assign(hello=itemgetter("str") | llm)

print(chain_with_assign.input_schema.schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}

Parameters kwargs (Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) –
Return type RunnableSerializable[Any, Any] async astream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]]) – config (Optional[RunnableConfig]) – stop (Optional[List[str]]) – kwargs (Any) – Return type AsyncIterator[str] astream_events(input: Any, config: Optional[RunnableConfig] = None, *, version: Literal['v1', 'v2'], include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → AsyncIterator[StreamEvent]¶ [Beta] Generate a stream of events. Use to create an iterator over StreamEvents that provide real-time information about the progress of the runnable, including StreamEvents from intermediate results. A StreamEvent is a dictionary with the following schema:
event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
name: str - The name of the runnable that generated the event.
run_id: str - Randomly generated ID associated with the given execution of the runnable that emitted the event.
A child runnable that gets invoked as part of the execution of a parent runnable is assigned its own unique ID.
parent_ids: List[str] - The IDs of the parent runnables that generated the event. The root runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for the v2 version of the API. The v1 version of the API will return an empty list.
tags: Optional[List[str]] - The tags of the runnable that generated the event.
metadata: Optional[Dict[str, Any]] - The metadata of the runnable that generated the event.
data: Dict[str, Any]

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table. ATTENTION: This reference table is for the V2 version of the schema.

event | name | chunk | input | output
on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} |
on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | |
on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world")
on_llm_start | [model name] | | {'input': 'hello'} |
on_llm_stream | [model name] | 'Hello' | |
on_llm_end | [model name] | | | 'Hello human!'
on_chain_start | format_docs | | |
on_chain_stream | format_docs | "hello world!, goodbye world!" | |
on_chain_end | format_docs | | [Document(…)] | "hello world!, goodbye world!"
on_tool_start | some_tool | | {"x": 1, "y": "2"} |
on_tool_end | some_tool | | | {"x": 1, "y": "2"}
on_retriever_start | [retriever name] | | {"query": "hello"} |
on_retriever_end | [retriever name] | | {"query": "hello"} | [Document(…), ..]
on_prompt_start | [template_name] | | {"question": "hello"} |
on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, …])
Here are declarations associated with the events shown above:

format_docs:

def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)

some_tool:

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})

Example:

from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event
    async for event in chain.astream_events("hello", version="v2")
]

# will produce the following events (run_id and parent_ids
# have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Parameters input (Any) – The input to the runnable. config (Optional[RunnableConfig]) – The config to use for the runnable. version (Literal['v1', 'v2']) – The version of the schema to use, either v2 or v1. Users should use v2. v1 is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. include_names (Optional[Sequence[str]]) – Only include events from runnables with matching names. include_types (Optional[Sequence[str]]) – Only include events from runnables with matching types. include_tags (Optional[Sequence[str]]) – Only include events from runnables with matching tags. exclude_names (Optional[Sequence[str]]) – Exclude events from runnables with matching names. exclude_types (Optional[Sequence[str]]) – Exclude events from runnables with matching types. exclude_tags (Optional[Sequence[str]]) – Exclude events from runnables with matching tags. kwargs (Any) – Additional keyword arguments to pass to the runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log. Returns
An async stream of StreamEvents. Return type AsyncIterator[StreamEvent] Notes async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. Parameters input (Any) – The input to the runnable. config (Optional[RunnableConfig]) – The config to use for the runnable. diff (bool) – Whether to yield diffs between each step, or the current state. with_streamed_output_list (bool) – Whether to yield the streamed_output list. include_names (Optional[Sequence[str]]) – Only include logs with these names. include_types (Optional[Sequence[str]]) – Only include logs with these types. include_tags (Optional[Sequence[str]]) – Only include logs with these tags. exclude_names (Optional[Sequence[str]]) – Exclude logs with these names. exclude_types (Optional[Sequence[str]]) – Exclude logs with these types.
exclude_tags (Optional[Sequence[str]]) – Exclude logs with these tags. kwargs (Any) – Return type Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]] async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. Parameters input (AsyncIterator[Input]) – config (Optional[RunnableConfig]) – kwargs (Optional[Any]) – Return type AsyncIterator[Output] batch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. Parameters inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]]]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Any) – Return type List[str]
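For the default batch implementation described above, a minimal sketch (again with the stand-in FakeListLLM; the prompts are placeholders):

from langchain_community.llms.fake import FakeListLLM

llm = FakeListLLM(responses=["one", "two", "three"])

# invoke() runs once per input on a thread pool; the returned list
# preserves the order of the inputs.
outputs = llm.batch(["first prompt", "second prompt", "third prompt"])
print(outputs)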
batch_as_completed(inputs: Sequence[Input], config: Optional[Union[RunnableConfig, Sequence[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → Iterator[Tuple[int, Union[Output, Exception]]]¶ Run invoke in parallel on a list of inputs, yielding results as they complete. Parameters inputs (Sequence[Input]) – config (Optional[Union[RunnableConfig, Sequence[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Optional[Any]) – Return type Iterator[Tuple[int, Union[Output, Exception]]] bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. Useful when a runnable in a chain requires an argument that is not in the output of the previous runnable or included in the user input. Example:

from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model='llama2')

# Without bind.
chain = (
    llm
    | StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = (
    llm.bind(stop=["three"])
    | StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'

Parameters kwargs (Any) – Return type Runnable[Input, Output] config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include (Optional[Sequence[str]]) – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. Return type Type[BaseModel] configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ Configure alternatives for runnables that can be set at runtime.

from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-sonnet-20240229"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI()
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(
        configurable={"llm": "openai"}
    ).invoke("which organization created you?").content
)

Parameters which (ConfigurableField) – default_key (str) – prefix_keys (bool) – kwargs (Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) – Return type RunnableSerializable[Input, Output]
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ Configure particular runnable fields at runtime.

from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ",
    model.invoke("tell me something about chess").content
)

# max_tokens = 200
print("max_tokens_200: ", model.with_config(
    configurable={"output_token_number": 200}
).invoke("tell me something about chess").content
)

Parameters kwargs (Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) – Return type RunnableSerializable[Input, Output] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model
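Since construct() skips validation entirely, it is worth seeing what that means in practice. A minimal sketch on a toy pydantic model (the same classmethod is inherited by this class):

from pydantic import BaseModel

class Point(BaseModel):
    x: int
    y: int

# No validation: the string "oops" is accepted even though x expects int.
p = Point.construct(x="oops", y=2)
print(p)  # x='oops' y=2 — only use construct() with already-trusted data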
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict embed_documents(texts: List[str]) → List[List[float]][source]¶ Compute doc embeddings using a HuggingFace transformer model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Compute query embeddings using a HuggingFace transformer model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type
Model classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM¶ Init the SelfHostedPipeline from a pipeline object or string. Parameters pipeline (Any) – hardware (Any) – model_reqs (Optional[List[str]]) – device (int) – kwargs (Any) – Return type LLM generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to a model and return generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[str]) – List of string prompts. stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. tags (Optional[Union[List[str], List[List[str]]]]) – metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) – run_name (Optional[Union[str, List[str]]]) – run_id (Optional[Union[UUID, List[Optional[UUID]]]]) – **kwargs – Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models). stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult get_graph(config: Optional[RunnableConfig] = None) → Graph¶ Return a graph representation of this runnable. Parameters config (Optional[RunnableConfig]) – Return type Graph get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows you to get an input schema for a specific configuration. Parameters config (Optional[RunnableConfig]) – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. Return type Type[BaseModel] classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is ["langchain", "llms", "openai"] Return type List[str] get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶ Get the name of the runnable. Parameters suffix (Optional[str]) – name (Optional[str]) – Return type str get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model's context window. Parameters text (str) – The string input to tokenize. Returns The integer number of tokens in the text. Return type int get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model's context window. Parameters messages (List[BaseMessage]) – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. Return type int get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows you to get an output schema for a specific configuration. Parameters config (Optional[RunnableConfig]) – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. Return type Type[BaseModel] get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
Parameters config (Optional[RunnableConfig]) – Return type List[BasePromptTemplate] get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text (str) – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. Return type List[int] invoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Transform a single input into an output. Override to implement. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]]) – The input to the runnable. config (Optional[RunnableConfig]) – A config to use when invoking the runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. stop (Optional[List[str]]) – kwargs (Any) – Returns The output of the runnable. Return type str classmethod is_lc_serializable() → bool¶ Is this class serializable? Return type bool
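A minimal sketch of invoke() and the token-counting helper described above, using the stand-in FakeListLLM (the default tokenizer behind get_num_tokens may require the transformers package):

from langchain_community.llms.fake import FakeListLLM

llm = FakeListLLM(responses=["pong"])

print(llm.invoke("ping"))  # -> 'pong'

# The exact count depends on which tokenizer the class uses.
print(llm.get_num_tokens("How many tokens is this?"))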
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. Return type List[str] map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. Example

from langchain_core.runnables import RunnableLambda

def _lambda(x: int) -> int:
    return x + 1
runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]

Return type Runnable[List[Input], List[Output]] classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶ Pick keys from the dict output of this runnable. Pick single key:

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}
json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]

Pick list of keys:

import json
from typing import Any

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)

def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")

chain = RunnableMap(
    str=as_str,
    json=as_json,
    bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

Parameters keys (Union[str, List[str]]) – Return type RunnableSerializable[Any, Any] pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶ Compose this Runnable with Runnable-like objects to make a RunnableSequence. Equivalent to RunnableSequence(self, *others) or self | others[0] | … Example

from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

def mul_two(x: int) -> int:
    return x * 2

runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)

sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]

Parameters others (Union[Runnable[Any, Other], Callable[[Any], Other]]) – name (Optional[str]) – Return type RunnableSerializable[Input, Other] predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ [Deprecated] Notes Deprecated since version langchain-core==0.1.7: Use invoke instead. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ [Deprecated] Notes Deprecated since version langchain-core==0.1.7: Use invoke instead. Parameters messages (List[BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type BaseMessage save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path (Union[Path, str]) – Path to file to save the LLM to. Return type None Example:
.. code-block:: python

llm.save(file_path="path/llm.yaml")

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode stream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]]) – config (Optional[RunnableConfig]) – stop (Optional[List[str]]) – kwargs (Any) – Return type Iterator[str] to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ Serialize the runnable to JSON. Return type Union[SerializedConstructor, SerializedNotImplemented] to_json_not_implemented() → SerializedNotImplemented¶ Return type SerializedNotImplemented transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while input is still being generated. Parameters input (Iterator[Input]) – config (Optional[RunnableConfig]) – kwargs (Optional[Any]) – Return type Iterator[Output] classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model with_alisteners(*, on_start: Optional[AsyncListener] = None, on_end: Optional[AsyncListener] = None, on_error: Optional[AsyncListener] = None) → Runnable[Input, Output]¶ Bind asynchronous lifecycle listeners to a Runnable, returning a new Runnable. on_start: Asynchronously called before the runnable starts running. on_end: Asynchronously called after the runnable finishes running. on_error: Asynchronously called if the runnable throws an error. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. Example: Parameters on_start (Optional[AsyncListener]) – on_end (Optional[AsyncListener]) – on_error (Optional[AsyncListener]) – Return type Runnable[Input, Output] with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. Parameters
config (Optional[RunnableConfig]) – kwargs (Any) – Return type Runnable[Input, Output] with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,), exception_key: Optional[str] = None) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Example

from typing import Iterator

from langchain_core.runnables import RunnableGenerator

def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""

def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"

runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print(''.join(runnable.stream({})))  # foo bar

Parameters fallbacks (Sequence[Runnable[Input, Output]]) – A sequence of runnables to try if the original runnable fails. exceptions_to_handle (Tuple[Type[BaseException], ...]) – A tuple of exception types to handle. exception_key (Optional[str]) – If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base runnable and its fallbacks must accept a dictionary as input. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. Return type RunnableWithFallbacksT[Input, Output]
with_listeners(*, on_start: Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]] = None, on_end: Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]] = None, on_error: Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. Example:

import time

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)

def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)

def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)

chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start,
    on_end=fn_end
)
chain.invoke(2)

Parameters on_start (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]) – on_end (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]) –
on_error (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]) – Return type Runnable[Input, Output] with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Example:

from langchain_core.runnables import RunnableLambda

count = 0

def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass

runnable = RunnableLambda(_lambda)

try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert (count == 2)

Parameters retry_if_exception_type (Tuple[Type[BaseException], ...]) – A tuple of exception types to retry on wait_exponential_jitter (bool) – Whether to add jitter to the wait time between retries stop_after_attempt (int) – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. Return type Runnable[Input, Output] with_structured_output(schema: Union[Dict, Type[BaseModel]], **kwargs: Any) → Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]¶ Not implemented on this class. Parameters
schema (Union[Dict, Type[BaseModel]]) – kwargs (Any) – Return type Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]] with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. Parameters input_type (Optional[Type[Input]]) – output_type (Optional[Type[Output]]) – Return type Runnable[Input, Output] property InputType: TypeAlias¶ Get the input type for this runnable. property OutputType: Type[str]¶ Get the output type for this runnable. property config_specs: List[ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"} name: Optional[str] = None¶ The name of the runnable. Used for debugging and tracing. property output_schema: Type[BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using SelfHostedEmbeddings¶ Self Hosted
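A hedged sketch of standing up SelfHostedEmbeddings on remote hardware. The Runhouse cluster name, the model choice, and the get_pipeline loader below are illustrative assumptions, not fixed API values; adapt them to your environment:

import runhouse as rh
from langchain_community.embeddings import SelfHostedEmbeddings

# Illustrative GPU cluster managed by Runhouse.
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")

def get_pipeline():
    # Runs on the remote box, so the imports live inside the function.
    from transformers import AutoModel, AutoTokenizer, pipeline

    tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
    model = AutoModel.from_pretrained("facebook/bart-base")
    return pipeline("feature-extraction", model=model, tokenizer=tokenizer)

embeddings = SelfHostedEmbeddings(
    model_load_fn=get_pipeline,
    hardware=gpu,
    model_reqs=["transformers", "torch"],
)
query_result = embeddings.embed_query("What is a self-hosted embedding?")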
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.self_hosted.SelfHostedEmbeddings.html
langchain_community.embeddings.elasticsearch.ElasticsearchEmbeddings¶ class langchain_community.embeddings.elasticsearch.ElasticsearchEmbeddings(client: MlClient, model_id: str, *, input_field: str = 'text_field')[source]¶ [Deprecated] Elasticsearch embedding models. This class provides an interface to generate embeddings using a model deployed in an Elasticsearch cluster. It requires an Elasticsearch connection object and the model_id of the model deployed in the cluster. In Elasticsearch you need to have an embedding model loaded and deployed. - https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html - https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html Notes Deprecated since version 0.1.11: Use the class in the langchain-elasticsearch package instead. Initialize the ElasticsearchEmbeddings instance. Parameters client (MlClient) – An Elasticsearch ML client object. model_id (str) – The model_id of the model deployed in the Elasticsearch cluster. input_field (str) – The name of the key for the input text field in the document. Defaults to 'text_field'. Methods __init__(client, model_id, *[, input_field]) Initialize the ElasticsearchEmbeddings instance. aembed_documents(texts) Asynchronous Embed search docs. aembed_query(text) Asynchronous Embed query text. embed_documents(texts) Generate embeddings for a list of documents. embed_query(text) Generate an embedding for a single query text. from_credentials(model_id, *[, es_cloud_id, ...]) Instantiate embeddings from Elasticsearch credentials. from_es_connection(model_id, es_connection) Instantiate embeddings from an existing Elasticsearch connection.
__init__(client: MlClient, model_id: str, *, input_field: str = 'text_field')[source]¶ Initialize the ElasticsearchEmbeddings instance. Parameters client (MlClient) – An Elasticsearch ML client object. model_id (str) – The model_id of the model deployed in the Elasticsearch cluster. input_field (str) – The name of the key for the input text field in the document. Defaults to 'text_field'. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] embed_documents(texts: List[str]) → List[List[float]][source]¶ Generate embeddings for a list of documents. Parameters texts (List[str]) – A list of document text strings to generate embeddings for. Returns A list of embeddings, one for each document in the input list. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Generate an embedding for a single query text. Parameters text (str) – The query text to generate an embedding for. Returns The embedding for the input query text. Return type List[float] classmethod from_credentials(model_id: str, *, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, input_field: str = 'text_field') → ElasticsearchEmbeddings[source]¶ Instantiate embeddings from Elasticsearch credentials. Parameters
model_id (str) – The model_id of the model deployed in the Elasticsearch cluster. input_field (str) – The name of the key for the input text field in the document. Defaults to 'text_field'. es_cloud_id (Optional[str]) – (str, optional): The Elasticsearch cloud ID to connect to. es_user (Optional[str]) – (str, optional): Elasticsearch username. es_password (Optional[str]) – (str, optional): Elasticsearch password. Return type ElasticsearchEmbeddings Example

from langchain_community.embeddings import ElasticsearchEmbeddings

# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"

# Credentials can be passed in two ways. Either set the env vars
# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically
# pulled in, or pass them in directly as kwargs.
embeddings = ElasticsearchEmbeddings.from_credentials(
    model_id,
    input_field=input_field,
    # es_cloud_id="foo",
    # es_user="bar",
    # es_password="baz",
)

documents = [
    "This is an example document.",
    "Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)

classmethod from_es_connection(model_id: str, es_connection: Elasticsearch, input_field: str = 'text_field') → ElasticsearchEmbeddings[source]¶ Instantiate embeddings from an existing Elasticsearch connection. This method provides a way to create an instance of the ElasticsearchEmbeddings class using an existing Elasticsearch connection. The connection object is used to create an MlClient, which is then used to initialize the
ElasticsearchEmbeddings instance. Args: model_id (str): The model_id of the model deployed in the Elasticsearch cluster. es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch connection object. input_field (str, optional): The name of the key for the input text field in the document. Defaults to 'text_field'. Returns: ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class. Example

from elasticsearch import Elasticsearch

from langchain_community.embeddings import ElasticsearchEmbeddings

# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"

# Create Elasticsearch connection
es_connection = Elasticsearch(
    hosts=["localhost:9200"], http_auth=("user", "password")
)

# Instantiate ElasticsearchEmbeddings using the existing connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
    model_id,
    es_connection,
    input_field=input_field,
)

documents = [
    "This is an example document.",
    "Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)

Parameters model_id (str) – es_connection (Elasticsearch) – input_field (str) – Return type ElasticsearchEmbeddings
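The async variants shown above can be awaited directly; by default they delegate to the sync implementations in an executor. A small sketch, assuming embeddings was built with either classmethod above:

import asyncio

async def embed_all(embeddings) -> None:
    vectors = await embeddings.aembed_documents(
        ["This is an example document.", "Another example document."]
    )
    query_vector = await embeddings.aembed_query("an example query")
    print(len(vectors), len(query_vector))

# asyncio.run(embed_all(embeddings))  # with an instance built as shown above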
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.elasticsearch.ElasticsearchEmbeddings.html
langchain_community.embeddings.volcengine.VolcanoEmbeddings¶ class langchain_community.embeddings.volcengine.VolcanoEmbeddings[source]¶ Bases: BaseModel, Embeddings Volcengine Embeddings embedding models. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param chunk_size: int = 100¶ Chunk size when multiple texts are input param client: Any = None¶ volcano client param host: str = 'maas-api.ml-platform-cn-beijing.volces.com'¶ Host; learn more from https://www.volcengine.com/docs/82379/1174746 param model: str = 'bge-large-zh'¶ Model name; you can get it from https://www.volcengine.com/docs/82379/1174746. For now, we support bge_large_zh param region: str = 'cn-beijing'¶ Region; learn more from https://www.volcengine.com/docs/82379/1174746 param version: str = '1.0'¶ model version param volcano_ak: Optional[str] = None¶ Volcano access key; learn more from: https://www.volcengine.com/docs/6459/76491#ak-sk param volcano_sk: Optional[str] = None¶ Volcano secret key; learn more from: https://www.volcengine.com/docs/6459/76491#ak-sk async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type
List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Embeds a list of text documents using the Volcano Engine embedding model. Parameters texts (List[str]) – A list of text documents to embed. Returns A list of embeddings for each document in the input list. Each embedding is represented as a list of float values. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed query text. Parameters text (str) – Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using VolcanoEmbeddings¶ Volc Engine
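A hedged construction sketch for VolcanoEmbeddings; the access key, secret key, and document strings below are placeholders, and the model/host/region defaults shown in the parameter list above apply when omitted:

from langchain_community.embeddings import VolcanoEmbeddings

embed = VolcanoEmbeddings(
    volcano_ak="your-access-key",  # placeholder
    volcano_sk="your-secret-key",  # placeholder
    model="bge-large-zh",
)

vectors = embed.embed_documents(["first text", "second text"])
query_vector = embed.embed_query("first text")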
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.volcengine.VolcanoEmbeddings.html
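A minimal usage sketch for VolcanoEmbeddings follows. The embed_query and embed_documents signatures come from the reference above; the credential parameter names (volcano_ak, volcano_sk) and their values are assumptions to verify against the constructor in your installed version.
from langchain_community.embeddings import VolcanoEmbeddings

# volcano_ak / volcano_sk are assumed parameter names for the Volc Engine
# access key and secret key; check them against your installed version.
embed = VolcanoEmbeddings(volcano_ak="YOUR_AK", volcano_sk="YOUR_SK")
query_vector = embed.embed_query("What does Volc Engine offer?")
doc_vectors = embed.embed_documents(["First document.", "Second document."])
print(len(doc_vectors), len(query_vector))  # number of documents, embedding dimension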
d6ff48495811-0
langchain_community.embeddings.modelscope_hub.ModelScopeEmbeddings¶ class langchain_community.embeddings.modelscope_hub.ModelScopeEmbeddings[source]¶ Bases: BaseModel, Embeddings ModelScope Hub embedding models. To use, you should have the modelscope python package installed. Example from langchain_community.embeddings import ModelScopeEmbeddings model_id = "damo/nlp_corom_sentence-embedding_english-base" embed = ModelScopeEmbeddings(model_id=model_id, model_revision="v1.0.0") Initialize the ModelScope embedding model. param embed: Any = None¶ param model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'¶ Model name to use. param model_revision: Optional[str] = None¶ Model revision to use. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.modelscope_hub.ModelScopeEmbeddings.html
d6ff48495811-1
values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.modelscope_hub.ModelScopeEmbeddings.html
d6ff48495811-2
exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Compute doc embeddings using a modelscope embedding model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Compute query embeddings using a modelscope embedding model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.modelscope_hub.ModelScopeEmbeddings.html
d6ff48495811-3
exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.modelscope_hub.ModelScopeEmbeddings.html
d6ff48495811-4
Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using ModelScopeEmbeddings¶ ModelScope
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.modelscope_hub.ModelScopeEmbeddings.html
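Extending the example above into an end-to-end sketch (requires the modelscope package; the input strings are illustrative):
from langchain_community.embeddings import ModelScopeEmbeddings

model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id, model_revision="v1.0.0")
# embed_documents returns one vector per input text; embed_query returns a single vector.
doc_vectors = embed.embed_documents(["ModelScope hosts models.", "It includes embedding models."])
query_vector = embed.embed_query("Where are ModelScope models hosted?")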
fb845ad6a8c2-0
langchain_ai21.embeddings.AI21Embeddings¶ class langchain_ai21.embeddings.AI21Embeddings[source]¶ Bases: Embeddings, AI21Base AI21 embedding model. To use, you should have the ‘AI21_API_KEY’ environment variable set, or pass the API key as a named parameter to the constructor. Example from langchain_ai21 import AI21Embeddings embeddings = AI21Embeddings() query_result = embeddings.embed_query("Hello embeddings world!") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_host: Optional[str] = None¶ param api_key: Optional[SecretStr] = None¶ Constraints type = string writeOnly = True format = password param batch_size: int = 128¶ Maximum number of texts to embed in each batch. param num_retries: Optional[int] = None¶ param timeout_sec: Optional[float] = None¶ async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_ai21.embeddings.AI21Embeddings.html
fb845ad6a8c2-1
values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_ai21.embeddings.AI21Embeddings.html
fb845ad6a8c2-2
exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str], *, batch_size: Optional[int] = None, **kwargs: Any) → List[List[float]][source]¶ Embed search docs. Parameters texts (List[str]) – batch_size (Optional[int]) – kwargs (Any) – Return type List[List[float]] embed_query(text: str, *, batch_size: Optional[int] = None, **kwargs: Any) → List[float][source]¶ Embed query text. Parameters text (str) – batch_size (Optional[int]) – kwargs (Any) – Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_ai21.embeddings.AI21Embeddings.html
fb845ad6a8c2-3
by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode
https://api.python.langchain.com/en/latest/embeddings/langchain_ai21.embeddings.AI21Embeddings.html
fb845ad6a8c2-4
dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_ai21.embeddings.AI21Embeddings.html
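A short sketch tying the pieces above together. It assumes AI21_API_KEY is set in the environment, per the class docstring; the batch_size override uses the keyword-only parameter documented on embed_documents.
from langchain_ai21 import AI21Embeddings

embeddings = AI21Embeddings()  # reads AI21_API_KEY from the environment
doc_vectors = embeddings.embed_documents(
    ["First document.", "Second document."],
    batch_size=64,  # override the default batch size of 128
)
query_vector = embeddings.embed_query("Hello embeddings world!")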
7b27ba6f8d50-0
langchain_community.embeddings.clova.ClovaEmbeddings¶ class langchain_community.embeddings.clova.ClovaEmbeddings[source]¶ Bases: BaseModel, Embeddings Clova’s embedding service. To use this service, you should have the following environment variables set with your API tokens and application ID, or pass them as named parameters to the constructor: CLOVA_EMB_API_KEY: API key for accessing Clova’s embedding service. CLOVA_EMB_APIGW_API_KEY: API gateway key for enhanced security. CLOVA_EMB_APP_ID: Application ID for identifying your application. Example from langchain_community.embeddings import ClovaEmbeddings embeddings = ClovaEmbeddings( clova_emb_api_key='your_clova_emb_api_key', clova_emb_apigw_api_key='your_clova_emb_apigw_api_key', app_id='your_app_id' ) query_text = "This is a test query." query_result = embeddings.embed_query(query_text) document_text = "This is a test document." document_result = embeddings.embed_documents([document_text]) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param app_id: Optional[SecretStr] = None¶ Application ID for identifying your application. Constraints type = string writeOnly = True format = password param clova_emb_api_key: Optional[SecretStr] = None¶ API key for accessing Clova’s embedding service. Constraints type = string writeOnly = True format = password param clova_emb_apigw_api_key: Optional[SecretStr] = None¶ API gateway key for enhanced security. Constraints type = string writeOnly = True
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.clova.ClovaEmbeddings.html
7b27ba6f8d50-1
API gateway key for enhanced security. Constraints type = string writeOnly = True format = password param endpoint_url: str = 'https://clovastudio.apigw.ntruss.com/testapp/v1/api-tools/embedding'¶ Endpoint URL to use. param model: str = 'clir-emb-dolphin'¶ Embedding model name to use. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.clova.ClovaEmbeddings.html
7b27ba6f8d50-2
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed a list of texts and return their embeddings. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed a single query text and return its embedding. Parameters text (str) – The text to embed.
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.clova.ClovaEmbeddings.html
7b27ba6f8d50-3
Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.clova.ClovaEmbeddings.html
7b27ba6f8d50-4
encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using ClovaEmbeddings¶ Clova Embeddings
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.clova.ClovaEmbeddings.html
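A runnable variant of the example above; the credential values are placeholders, and the default endpoint_url and model shown in the parameter list are used unless overridden.
from langchain_community.embeddings import ClovaEmbeddings

embeddings = ClovaEmbeddings(
    clova_emb_api_key="your_clova_emb_api_key",              # CLOVA_EMB_API_KEY
    clova_emb_apigw_api_key="your_clova_emb_apigw_api_key",  # CLOVA_EMB_APIGW_API_KEY
    app_id="your_app_id",                                    # CLOVA_EMB_APP_ID
)
doc_vectors = embeddings.embed_documents(["First document.", "Second document."])
query_vector = embeddings.embed_query("This is a test query.")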
39be7f3df1c3-0
langchain_community.embeddings.cloudflare_workersai.CloudflareWorkersAIEmbeddings¶ class langchain_community.embeddings.cloudflare_workersai.CloudflareWorkersAIEmbeddings[source]¶ Bases: BaseModel, Embeddings Cloudflare Workers AI embedding model. To use, you need to provide an API token and account ID to access Cloudflare Workers AI. Example from langchain_community.embeddings import CloudflareWorkersAIEmbeddings account_id = "my_account_id" api_token = "my_secret_api_token" model_name = "@cf/baai/bge-small-en-v1.5" cf = CloudflareWorkersAIEmbeddings( account_id=account_id, api_token=api_token, model_name=model_name ) Initialize the Cloudflare Workers AI client. param account_id: str [Required]¶ param api_base_url: str = 'https://api.cloudflare.com/client/v4/accounts'¶ param api_token: str [Required]¶ param batch_size: int = 50¶ param headers: Dict[str, str] = {'Authorization': 'Bearer '}¶ param model_name: str = '@cf/baai/bge-base-en-v1.5'¶ param strip_new_lines: bool = True¶ async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.cloudflare_workersai.CloudflareWorkersAIEmbeddings.html
39be7f3df1c3-1
Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.cloudflare_workersai.CloudflareWorkersAIEmbeddings.html
39be7f3df1c3-2
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Compute doc embeddings using Cloudflare Workers AI. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Compute query embeddings using Cloudflare Workers AI. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.cloudflare_workersai.CloudflareWorkersAIEmbeddings.html
39be7f3df1c3-3
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.cloudflare_workersai.CloudflareWorkersAIEmbeddings.html
39be7f3df1c3-4
ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using CloudflareWorkersAIEmbeddings¶ Cloudflare Cloudflare Workers AI
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.cloudflare_workersai.CloudflareWorkersAIEmbeddings.html
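A sketch based on the example above. The account ID and token are placeholders; texts are sent in batches of batch_size (50 by default), and the pydantic dict() helper documented above can inspect the configured fields.
from langchain_community.embeddings import CloudflareWorkersAIEmbeddings

cf = CloudflareWorkersAIEmbeddings(
    account_id="my_account_id",
    api_token="my_secret_api_token",
    model_name="@cf/baai/bge-small-en-v1.5",
)
doc_vectors = cf.embed_documents(["doc one", "doc two"])
query_vector = cf.embed_query("what is a bge model?")
print(cf.dict(exclude={"api_token", "headers"}))  # inspect config without printing the token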
9b8580107690-0
langchain_community.embeddings.mlflow.MlflowEmbeddings¶ class langchain_community.embeddings.mlflow.MlflowEmbeddings[source]¶ Bases: Embeddings, BaseModel Embedding models served in MLflow. To use, you should have the mlflow[genai] python package installed. For more information, see https://mlflow.org/docs/latest/llms/deployments. Example from langchain_community.embeddings import MlflowEmbeddings embeddings = MlflowEmbeddings( target_uri="http://localhost:5000", endpoint="embeddings", ) param documents_params: Dict[str, str] = {}¶ The parameters to use for documents. param endpoint: str [Required]¶ The endpoint to use. param query_params: Dict[str, str] = {}¶ The parameters to use for queries. param target_uri: str [Required]¶ The target URI to use. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.mlflow.MlflowEmbeddings.html
9b8580107690-1
values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.mlflow.MlflowEmbeddings.html
9b8580107690-2
exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed(texts: List[str], params: Dict[str, str]) → List[List[float]][source]¶ Parameters texts (List[str]) – params (Dict[str, str]) – Return type List[List[float]] embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed query text. Parameters text (str) – Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.mlflow.MlflowEmbeddings.html
9b8580107690-3
skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.mlflow.MlflowEmbeddings.html
9b8580107690-4
dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using MlflowEmbeddings¶ MLflow Deployments for LLMs
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.mlflow.MlflowEmbeddings.html
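To close the loop on the example above, a minimal sketch assuming an MLflow deployments server is running locally and exposes an endpoint named "embeddings":
from langchain_community.embeddings import MlflowEmbeddings

embeddings = MlflowEmbeddings(
    target_uri="http://localhost:5000",
    endpoint="embeddings",
)
doc_vectors = embeddings.embed_documents(["First document.", "Second document."])
query_vector = embeddings.embed_query("a sample query")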