Columns: id, text, source
101afca65472-0
langchain_community.embeddings.sparkllm.Url¶ class langchain_community.embeddings.sparkllm.Url(host: str, path: str, schema: str)[source]¶ Methods __init__(host, path, schema) Parameters host (str) – path (str) – schema (str) – __init__(host: str, path: str, schema: str) → None[source]¶ Parameters host (str) – path (str) – schema (str) – Return type None
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.sparkllm.Url.html
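A minimal construction sketch for the Url helper documented above. The host and path values are hypothetical placeholders, and the sketch assumes the constructor simply stores its three arguments as attributes, as the signature suggests.

from langchain_community.embeddings.sparkllm import Url

# Placeholder endpoint values -- substitute the host/path of your SparkLLM
# embedding service; `schema` here is the URL scheme.
url = Url(host="example-spark-host.com", path="/v1/embeddings", schema="https")
print(url.host, url.path, url.schema)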
fb9a8eed5ed2-0
langchain_community.embeddings.vertexai.VertexAIEmbeddings¶ class langchain_community.embeddings.vertexai.VertexAIEmbeddings[source]¶ Bases: _VertexAICommon, Embeddings [Deprecated] Google Cloud VertexAI embedding models. Notes Deprecated since version 0.0.12. Initialize the embedding model. param credentials: Any = None¶ The default custom credentials (google.auth.credentials.Credentials) to use param location: str = 'us-central1'¶ The default location to use when making API calls. param max_output_tokens: int = 128¶ Token limit determines the maximum amount of text output from one prompt. param max_retries: int = 6¶ The maximum number of retries to make when generating. param model_name: str [Required]¶ Underlying model name. param n: int = 1¶ How many completions to generate for each prompt. param project: Optional[str] = None¶ The default GCP project to use when making Vertex API calls. param request_parallelism: int = 5¶ The amount of parallelism allowed for requests issued to VertexAI models. param show_progress_bar: bool = False¶ Whether to show a tqdm progress bar. Must have tqdm installed. param stop: Optional[List[str]] = None¶ Optional list of stop words to use when generating. param streaming: bool = False¶ Whether to stream the results or not. param temperature: float = 0.0¶ Sampling temperature; it controls the degree of randomness in token selection. param top_k: int = 40¶ How the model selects tokens for output: the next token is selected from among the top_k most probable tokens. param top_p: float = 0.95¶ Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value.
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.vertexai.VertexAIEmbeddings.html
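A hedged usage sketch for this (deprecated) class, assuming Google Cloud application-default credentials are configured and that the chosen Vertex AI embedding model is enabled in your project; the model and project names below are placeholders. Newer code typically uses the equivalent class from the langchain-google-vertexai package instead.

from langchain_community.embeddings import VertexAIEmbeddings

# model_name is the only required field; "textembedding-gecko@001" and the
# project id are illustrative placeholders.
embeddings = VertexAIEmbeddings(
    model_name="textembedding-gecko@001",
    project="my-gcp-project",
)
vector = embeddings.embed_query("What is Vertex AI?")
print(len(vector))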
fb9a8eed5ed2-1
Tokens are selected from most probable to least until the sum of their async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.vertexai.VertexAIEmbeddings.html
fb9a8eed5ed2-2
self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed(texts: List[str], batch_size: int = 0, embeddings_task_type: Optional[Literal['RETRIEVAL_QUERY', 'RETRIEVAL_DOCUMENT', 'SEMANTIC_SIMILARITY', 'CLASSIFICATION', 'CLUSTERING']] = None) → List[List[float]][source]¶ Embed a list of strings. Parameters texts (List[str]) – List[str] The list of strings to embed. batch_size (int) – [int] The batch size of embeddings to send to the model. If zero, then the largest batch size will be detected dynamically at the first request, starting from 250, down to 5.
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.vertexai.VertexAIEmbeddings.html
fb9a8eed5ed2-3
at the first request, starting from 250, down to 5. embeddings_task_type (Optional[Literal['RETRIEVAL_QUERY', 'RETRIEVAL_DOCUMENT', 'SEMANTIC_SIMILARITY', 'CLASSIFICATION', 'CLUSTERING']]) – [str] optional embeddings task type, one of the following: RETRIEVAL_QUERY - Text is a query in a search/retrieval setting. RETRIEVAL_DOCUMENT - Text is a document in a search/retrieval setting. SEMANTIC_SIMILARITY - Embeddings will be used for Semantic Textual Similarity (STS). CLASSIFICATION - Embeddings will be used for classification. CLUSTERING - Embeddings will be used for clustering. Returns List of embeddings, one for each text. Return type List[List[float]] embed_documents(texts: List[str], batch_size: int = 0) → List[List[float]][source]¶ Embed a list of documents. Parameters texts (List[str]) – List[str] The list of texts to embed. batch_size (int) – [int] The batch size of embeddings to send to the model. If zero, then the largest batch size will be detected dynamically at the first request, starting from 250, down to 5. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed a text. Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.vertexai.VertexAIEmbeddings.html
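A short sketch of the task-typed embed() call documented above, assuming `embeddings` is an already-configured VertexAIEmbeddings instance (see the earlier sketch); the texts are illustrative.

# batch_size=0 lets the class detect the largest workable batch size itself,
# as described above; the task types match the Literal values in the signature.
docs = ["Paris is the capital of France.", "The Nile flows through Egypt."]
doc_vectors = embeddings.embed(docs, batch_size=0, embeddings_task_type="RETRIEVAL_DOCUMENT")
query_vector = embeddings.embed(["capital of France"], embeddings_task_type="RETRIEVAL_QUERY")[0]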
fb9a8eed5ed2-4
Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.vertexai.VertexAIEmbeddings.html
fb9a8eed5ed2-5
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model property is_codey_model: bool¶ task_executor: ClassVar[Optional[Executor]] = FieldInfo(exclude=True, extra={})¶ Examples using VertexAIEmbeddings¶ Google Google AlloyDB for PostgreSQL Google BigQuery Vector Search Google Cloud SQL for MySQL Google Cloud SQL for PostgreSQL Google Cloud Vertex AI Reranker Google Firestore (Native Mode) Google Spanner Google Vertex AI PaLM Google Vertex AI Vector Search
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.vertexai.VertexAIEmbeddings.html
ca320176a2fb-0
langchain_community.embeddings.huggingface.HuggingFaceBgeEmbeddings¶ class langchain_community.embeddings.huggingface.HuggingFaceBgeEmbeddings[source]¶ Bases: BaseModel, Embeddings HuggingFace sentence_transformers embedding models. To use, you should have the sentence_transformers python package installed. To use Nomic, make sure the version of sentence_transformers >= 2.3.0. Bge Example: from langchain_community.embeddings import HuggingFaceBgeEmbeddings model_name = "BAAI/bge-large-en" model_kwargs = {'device': 'cpu'} encode_kwargs = {'normalize_embeddings': True} hf = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) Nomic Example: from langchain_community.embeddings import HuggingFaceBgeEmbeddings model_name = "nomic-ai/nomic-embed-text-v1" model_kwargs = { 'device': 'cpu', 'trust_remote_code': True } encode_kwargs = {'normalize_embeddings': True} hf = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs, query_instruction = "search_query:", embed_instruction = "search_document:" ) Initialize the sentence_transformer. param cache_folder: Optional[str] = None¶ Path to store models. Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable. param embed_instruction: str = ''¶ Instruction to use for embedding documents. param encode_kwargs: Dict[str, Any] [Optional]¶ Keyword arguments to pass when calling the encode method of the model. param model_kwargs: Dict[str, Any] [Optional]¶
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.huggingface.HuggingFaceBgeEmbeddings.html
ca320176a2fb-1
param model_kwargs: Dict[str, Any] [Optional]¶ Keyword arguments to pass to the model. param model_name: str = 'BAAI/bge-large-en'¶ Model name to use. param query_instruction: str = 'Represent this question for searching relevant passages: '¶ Instruction to use for embedding query. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.huggingface.HuggingFaceBgeEmbeddings.html
ca320176a2fb-2
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Compute doc embeddings using a HuggingFace transformer model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Compute query embeddings using a HuggingFace transformer model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.huggingface.HuggingFaceBgeEmbeddings.html
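A usage sketch for the embed_documents and embed_query methods documented in this chunk, continuing the BGE example from the class docstring; it assumes sentence_transformers is installed and that the BAAI/bge-large-en weights can be downloaded on first use.

from langchain_community.embeddings import HuggingFaceBgeEmbeddings

hf = HuggingFaceBgeEmbeddings(
    model_name="BAAI/bge-large-en",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True},
)
# Both methods run locally; no API key is involved.
doc_vectors = hf.embed_documents(["BGE is an embedding model.", "It is published on Hugging Face."])
query_vector = hf.embed_query("What is BGE?")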
ca320176a2fb-3
List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.huggingface.HuggingFaceBgeEmbeddings.html
ca320176a2fb-4
Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using HuggingFaceBgeEmbeddings¶ BGE on Hugging Face Hugging Face
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.huggingface.HuggingFaceBgeEmbeddings.html
b2b62eabf320-0
langchain_community.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding¶ class langchain_community.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding[source]¶ Bases: BaseModel, Embeddings Aleph Alpha’s asymmetric semantic embedding. AA provides you with an endpoint to embed a document and a query. The models were optimized to make the embeddings of documents and the query for a document as similar as possible. To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/ Example Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param aleph_alpha_api_key: Optional[str] = None¶ API key for Aleph Alpha API. param compress_to_size: Optional[int] = None¶ Should the returned embeddings come back as an original 5120-dim vector, or should it be compressed to 128-dim. param contextual_control_threshold: Optional[int] = None¶ Attention control parameters only apply to those tokens that have explicitly been set in the request. param control_log_additive: bool = True¶ Apply controls on prompt items by adding the log(control_factor) to attention scores. param host: str = 'https://api.aleph-alpha.com'¶ The hostname of the API host. The default one is “https://api.aleph-alpha.com”) param hosting: Optional[str] = None¶ Determines in which datacenters the request may be processed. You can either set the parameter to “aleph-alpha” or omit it (defaulting to None). Not setting this value, or setting it to None, gives us maximal flexibility in processing your request in our own datacenters and on servers hosted with other providers.
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html
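The “Example” heading in this chunk lost its body during extraction; the sketch below is a plausible reconstruction, not the original example. It assumes an Aleph Alpha API key is supplied via the aleph_alpha_api_key field or the corresponding environment variable, and the compress_to_size/normalize values are illustrative uses of the fields documented above.

from langchain_community.embeddings import AlephAlphaAsymmetricSemanticEmbedding

embeddings = AlephAlphaAsymmetricSemanticEmbedding(normalize=True, compress_to_size=128)
document = "Paris is the capital of France."
query = "What is the capital of France?"
# Asymmetric: documents and queries go through different embedding endpoints.
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)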
b2b62eabf320-1
in processing your request in our own datacenters and on servers hosted with other providers. Choose this option for maximal availability. Setting it to “aleph-alpha” allows us to only process the request in our own datacenters. Choose this option for maximal data privacy. param model: str = 'luminous-base'¶ Model name to use. param nice: bool = False¶ Setting this to True will signal to the API that you intend to be nice to other users by de-prioritizing your request below concurrent ones. param normalize: bool = False¶ Whether returned embeddings should be normalized. param request_timeout_seconds: int = 305¶ Client timeout that will be set for HTTP requests in the requests library’s API calls. The server will close all requests after 300 seconds with an internal server error. param total_retries: int = 8¶ The number of retries made in case requests fail with certain retryable status codes. If the last retry fails, a corresponding exception is raised. Note that between retries an exponential backoff is applied, starting with 0.5 s after the first retry and doubling for each retry made. So with the default setting of 8 retries a total wait time of 63.5 s is added between the retries. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html
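To make the quoted 63.5 s figure concrete: with the default total_retries = 8 there are seven waits between the eight attempts, starting at 0.5 s and doubling each time.

# 0.5 + 1 + 2 + 4 + 8 + 16 + 32 = 63.5 seconds of cumulative backoff
waits = [0.5 * 2 ** i for i in range(7)]
print(sum(waits))  # 63.5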
b2b62eabf320-2
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html
b2b62eabf320-3
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Call out to Aleph Alpha’s asymmetric Document endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Call out to Aleph Alpha’s asymmetric, query embedding endpoint :param text: The text to embed. Returns Embeddings for the text. Parameters text (str) – Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict().
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html
b2b62eabf320-4
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html
b2b62eabf320-5
Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using AlephAlphaAsymmetricSemanticEmbedding¶ Aleph Alpha
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html
71c476e19fb6-0
langchain_community.embeddings.dashscope.DashScopeEmbeddings¶ class langchain_community.embeddings.dashscope.DashScopeEmbeddings[source]¶ Bases: BaseModel, Embeddings DashScope embedding models. To use, you should have the dashscope python package installed, and the environment variable DASHSCOPE_API_KEY set with your API key or pass it as a named parameter to the constructor. Example from langchain_community.embeddings import DashScopeEmbeddings embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key") Example import os os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY" from langchain_community.embeddings.dashscope import DashScopeEmbeddings embeddings = DashScopeEmbeddings( model="text-embedding-v1", ) text = "This is a test query." query_result = embeddings.embed_query(text) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param client: Any = None¶ The DashScope client. param dashscope_api_key: Optional[str] = None¶ param max_retries: int = 5¶ Maximum number of retries to make when generating. param model: str = 'text-embedding-v1'¶ async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.dashscope.DashScopeEmbeddings.html
71c476e19fb6-1
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.dashscope.DashScopeEmbeddings.html
71c476e19fb6-2
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Call out to DashScope’s embedding endpoint for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. chunk_size – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Call out to DashScope’s embedding endpoint for embedding query text. Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.dashscope.DashScopeEmbeddings.html
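A brief sketch of the two endpoint calls documented in this chunk, continuing the class docstring example; it assumes DASHSCOPE_API_KEY is set in the environment (or passed as dashscope_api_key=...).

from langchain_community.embeddings import DashScopeEmbeddings

embeddings = DashScopeEmbeddings(model="text-embedding-v1")
# embed_documents and embed_query both call DashScope's embedding endpoint.
doc_vectors = embeddings.embed_documents(["DashScope is Alibaba Cloud's model service.", "It exposes an embedding API."])
query_vector = embeddings.embed_query("What is DashScope?")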
71c476e19fb6-3
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.dashscope.DashScopeEmbeddings.html
71c476e19fb6-4
Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using DashScopeEmbeddings¶ DashScope DashVector
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.dashscope.DashScopeEmbeddings.html
44d85d6bc3e4-0
langchain_community.embeddings.solar.SolarEmbeddings¶ class langchain_community.embeddings.solar.SolarEmbeddings[source]¶ Bases: BaseModel, Embeddings [Deprecated] Solar’s embedding service. To use, you should have the environment variable ``SOLAR_API_KEY`` set with your API token, or pass it as a named parameter to the constructor. Example from langchain_community.embeddings import SolarEmbeddings embeddings = SolarEmbeddings() query_text = "This is a test query." query_result = embeddings.embed_query(query_text) document_text = "This is a test document." document_result = embeddings.embed_documents([document_text]) Notes Deprecated since version 0.0.34. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param endpoint_url: str = 'https://api.upstage.ai/v1/solar/embeddings'¶ Endpoint URL to use. param model: str = 'solar-1-mini-embedding-query'¶ Embeddings model name to use. param solar_api_key: Optional[SecretStr] = None¶ API Key for Solar API. Constraints type = string writeOnly = True format = password async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.solar.SolarEmbeddings.html
44d85d6bc3e4-1
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.solar.SolarEmbeddings.html
44d85d6bc3e4-2
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed(text: str) → List[List[float]][source]¶ Parameters text (str) – Return type List[List[float]] embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed documents using a Solar embedding endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed a query using a Solar embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.solar.SolarEmbeddings.html
44d85d6bc3e4-3
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.solar.SolarEmbeddings.html
44d85d6bc3e4-4
Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using SolarEmbeddings¶ Solar
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.solar.SolarEmbeddings.html
a6ea7172062a-0
langchain_community.embeddings.fake.FakeEmbeddings¶ class langchain_community.embeddings.fake.FakeEmbeddings[source]¶ Bases: Embeddings, BaseModel Fake embedding model. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param size: int [Required]¶ The size of the embedding vector. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.fake.FakeEmbeddings.html
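Because FakeEmbeddings needs no API access, it is a convenient stand-in for a real embedding model in tests; `size` is the only required field, and the returned vectors are random rather than meaningful.

from langchain_community.embeddings import FakeEmbeddings

fake = FakeEmbeddings(size=1536)
# Each returned vector has exactly `size` components.
vectors = fake.embed_documents(["hello", "world"])
assert len(vectors) == 2 and len(vectors[0]) == 1536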
a6ea7172062a-1
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed query text. Parameters text (str) – Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.fake.FakeEmbeddings.html
a6ea7172062a-2
Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.fake.FakeEmbeddings.html
a6ea7172062a-3
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using FakeEmbeddings¶ Baidu VectorDB DocArray Fake Embeddings Google Memorystore for Redis PGVecto.rs Relyt Tair Tencent Cloud VectorDB Vectara Vectara
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.fake.FakeEmbeddings.html
c778090d3066-0
langchain_community.embeddings.gigachat.GigaChatEmbeddings¶ class langchain_community.embeddings.gigachat.GigaChatEmbeddings[source]¶ Bases: BaseModel, Embeddings GigaChat Embeddings models. Example Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param access_token: Optional[str] = None¶ Access token for GigaChat param auth_url: Optional[str] = None¶ Auth URL param base_url: Optional[str] = None¶ Base API URL param ca_bundle_file: Optional[str] = None¶ param cert_file: Optional[str] = None¶ param credentials: Optional[str] = None¶ Auth Token param key_file: Optional[str] = None¶ param key_file_password: Optional[str] = None¶ param model: Optional[str] = None¶ Model name to use. param password: Optional[str] = None¶ Password for authentication param scope: Optional[str] = None¶ Permission scope for the access token param timeout: Optional[float] = 600¶ Timeout for requests. The default is large enough for long-running requests. param user: Optional[str] = None¶ Username for authentication param verify_ssl_certs: Optional[bool] = None¶ Check certificates for all requests async aembed_documents(texts: List[str]) → List[List[float]][source]¶ Embed documents using the GigaChat embeddings model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] async aembed_query(text: str) → List[float][source]¶ Embed a query using the GigaChat embeddings model. Parameters text (str) – The text to embed. Returns
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.gigachat.GigaChatEmbeddings.html
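The “Example” heading in this chunk has no body; below is a hedged sketch rather than the original example. It assumes you already hold a GigaChat authorization token, and the scope value is a placeholder for whichever scope your account was issued.

from langchain_community.embeddings import GigaChatEmbeddings

embeddings = GigaChatEmbeddings(
    credentials="<your-auth-token>",   # placeholder token
    scope="GIGACHAT_API_PERS",         # placeholder scope
    verify_ssl_certs=False,
)
query_vector = embeddings.embed_query("Hello, GigaChat!")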
c778090d3066-1
Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.gigachat.GigaChatEmbeddings.html
c778090d3066-2
self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed documents using the GigaChat embeddings model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed a query using the GigaChat embeddings model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.gigachat.GigaChatEmbeddings.html
c778090d3066-3
Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.gigachat.GigaChatEmbeddings.html
c778090d3066-4
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using GigaChatEmbeddings¶ GigaChat Salute Devices
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.gigachat.GigaChatEmbeddings.html
123337754ad5-0
langchain_community.embeddings.jina.JinaEmbeddings¶ class langchain_community.embeddings.jina.JinaEmbeddings[source]¶ Bases: BaseModel, Embeddings Jina embedding models. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param jina_api_key: Optional[SecretStr] = None¶ Constraints type = string writeOnly = True format = password param model_name: str = 'jina-embeddings-v2-base-en'¶ async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.jina.JinaEmbeddings.html
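A hedged usage sketch built only from the fields documented above (jina_api_key and model_name); the key below is a placeholder, and the texts are illustrative.

from langchain_community.embeddings import JinaEmbeddings

embeddings = JinaEmbeddings(
    jina_api_key="jina_***",  # placeholder key
    model_name="jina-embeddings-v2-base-en",
)
doc_vectors = embeddings.embed_documents(["Jina provides embedding models.", "This is a second document."])
query_vector = embeddings.embed_query("What does Jina provide?")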
123337754ad5-1
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Call out to Jina’s embedding endpoint. :param texts: The list of texts to embed. Returns List of embeddings, one for each text. Parameters texts (List[str]) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.jina.JinaEmbeddings.html
123337754ad5-2
List of embeddings, one for each text. Parameters texts (List[str]) – Return type List[List[float]] embed_images(uris: List[str]) → List[List[float]][source]¶ Call out to Jina’s image embedding endpoint. :param uris: The list of uris to embed. Returns List of embeddings, one for each image. Parameters uris (List[str]) – Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Call out to Jina’s embedding endpoint. :param text: The text to embed. Returns Embeddings for the text. Parameters text (str) – Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.jina.JinaEmbeddings.html
123337754ad5-3
skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.jina.JinaEmbeddings.html
123337754ad5-4
dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using JinaEmbeddings¶ Jina Jina Reranker
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.jina.JinaEmbeddings.html
d3ec891da3c3-0
langchain_community.embeddings.optimum_intel.QuantizedBiEncoderEmbeddings¶ class langchain_community.embeddings.optimum_intel.QuantizedBiEncoderEmbeddings[source]¶ Bases: BaseModel, Embeddings Quantized bi-encoder embedding models. Please ensure that you have installed optimum-intel and ipex. Input: model_name: str = Model name. max_seq_len: int = The maximum sequence length for tokenization. (default 512) pooling_strategy: str = "mean" or "cls", pooling strategy for the final layer. (default "mean") query_instruction: Optional[str] = An instruction to add to the query before embedding. (default None) document_instruction: Optional[str] = An instruction to add to each document before embedding. (default None) padding: Optional[bool] = Whether to add padding during tokenization or not. (default True) model_kwargs: Optional[Dict] = Parameters to add to the model during initialization. (default {}) encode_kwargs: Optional[Dict] = Parameters to add during the embedding forward pass. (default {}) Example: from langchain_community.embeddings import QuantizedBiEncoderEmbeddings model_name = "Intel/bge-small-en-v1.5-rag-int8-static" encode_kwargs = {'normalize_embeddings': True} hf = QuantizedBiEncoderEmbeddings( model_name, encode_kwargs=encode_kwargs, query_instruction="Represent this sentence for searching relevant passages: " ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]]
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.optimum_intel.QuantizedBiEncoderEmbeddings.html
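Building on the constructor example above, a short sketch of embedding documents and a query; it assumes optimum-intel and ipex are installed and reuses the model id from the example:

from langchain_community.embeddings import QuantizedBiEncoderEmbeddings

embeddings = QuantizedBiEncoderEmbeddings(
    model_name="Intel/bge-small-en-v1.5-rag-int8-static",
    encode_kwargs={"normalize_embeddings": True},
    query_instruction="Represent this sentence for searching relevant passages: ",
)
# Documents get document_instruction (if set); queries get query_instruction.
doc_vectors = embeddings.embed_documents(["Quantized bi-encoders run efficiently on CPU"])
query_vector = embeddings.embed_query("Why quantize an embedding model?")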
d3ec891da3c3-1
Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.optimum_intel.QuantizedBiEncoderEmbeddings.html
d3ec891da3c3-2
self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed a list of text documents using the Optimized Embedder model. Input:texts: List[str] = List of text documents to embed. Output:List[List[float]] = The embeddings of each text document. Parameters texts (List[str]) – Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed query text. Parameters text (str) – Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.optimum_intel.QuantizedBiEncoderEmbeddings.html
d3ec891da3c3-3
Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode load_model() → None[source]¶ Return type None classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.optimum_intel.QuantizedBiEncoderEmbeddings.html
d3ec891da3c3-4
Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using QuantizedBiEncoderEmbeddings¶ Embedding Documents using Optimized and Quantized Embedders Intel
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.optimum_intel.QuantizedBiEncoderEmbeddings.html
02d53f91fa28-0
langchain_upstage.embeddings.UpstageEmbeddings¶ class langchain_upstage.embeddings.UpstageEmbeddings[source]¶ Bases: BaseModel, Embeddings UpstageEmbeddings embedding model. To use, set the environment variable UPSTAGE_API_KEY with your API key or pass it as a named parameter to the constructor. Example from langchain_upstage import UpstageEmbeddings model = UpstageEmbeddings(model='solar-embedding-1-large') Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_special: Union[Literal['all'], Set[str]] = {}¶ Not yet supported. param chunk_size: int = 1000¶ Maximum number of texts to embed in each batch. Not yet supported. param default_headers: Optional[Mapping[str, str]] = None¶ param default_query: Optional[Mapping[str, object]] = None¶ param dimensions: Optional[int] = None¶ The number of dimensions the resulting output embeddings should have. Not yet supported. param disallowed_special: Union[Literal['all'], Set[str], Sequence[str]] = 'all'¶ Not yet supported. param embed_batch_size: int = 10¶ param embedding_ctx_length: int = 4096¶ The maximum number of tokens to embed at once. Not yet supported. param http_async_client: Optional[Any] = None¶ Optional httpx.AsyncClient. Only used for async invocations. Must specify http_client as well if you’d like a custom client for sync invocations. param http_client: Optional[Any] = None¶ Optional httpx.Client. Only used for sync invocations. Must specify http_async_client as well if you’d like a custom client for async invocations.
https://api.python.langchain.com/en/latest/embeddings/langchain_upstage.embeddings.UpstageEmbeddings.html
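A short usage sketch, assuming UPSTAGE_API_KEY is set in the environment; as documented below, documents are embedded with the passage model and queries with the query model:

from langchain_upstage import UpstageEmbeddings

embeddings = UpstageEmbeddings(model="solar-embedding-1-large")
doc_vectors = embeddings.embed_documents(["Solar is Upstage's embedding model family"])  # passage model
query_vector = embeddings.embed_query("Which embedding models does Upstage offer?")  # query model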
02d53f91fa28-1
http_async_client as well if you’d like a custom client for async invocations. param max_retries: int = 2¶ Maximum number of retries to make when generating. param model: str [Required]¶ Embeddings model name to use. Do not add suffixes like -query and -passage. Instead, use ‘solar-embedding-1-large’ for example. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param request_timeout: Optional[Union[float, Tuple[float, float], Any]] = None (alias 'timeout')¶ Timeout for requests to Upstage embedding API. Can be float, httpx.Timeout or None. param show_progress_bar: bool = False¶ Whether to show a progress bar when embedding. Not yet supported. param skip_empty: bool = False¶ Whether to skip empty strings when embedding or raise an error. Defaults to not skipping. Not yet supported. param upstage_api_base: str = 'https://api.upstage.ai/v1/solar' (alias 'base_url')¶ Endpoint URL to use. param upstage_api_key: Optional[SecretStr] = None (alias 'api_key')¶ API Key for Solar API. Constraints type = string writeOnly = True format = password async aembed_documents(texts: List[str]) → List[List[float]][source]¶ Embed a list of document texts using passage model asynchronously. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] async aembed_query(text: str) → List[float][source]¶ Asynchronous Embed query text using query model. Parameters text (str) – The text to embed.
https://api.python.langchain.com/en/latest/embeddings/langchain_upstage.embeddings.UpstageEmbeddings.html
02d53f91fa28-2
Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_upstage.embeddings.UpstageEmbeddings.html
02d53f91fa28-3
self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed a list of document texts using passage model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed query text using query model. Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_upstage.embeddings.UpstageEmbeddings.html
02d53f91fa28-4
Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_upstage.embeddings.UpstageEmbeddings.html
02d53f91fa28-5
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_upstage.embeddings.UpstageEmbeddings.html
722e4139607c-0
langchain_community.embeddings.titan_takeoff.TitanTakeoffEmbed¶ class langchain_community.embeddings.titan_takeoff.TitanTakeoffEmbed(base_url: str = 'http://localhost', port: int = 3000, mgmt_port: int = 3001, models: List[ReaderConfig] = [])[source]¶ Interface with Takeoff Inference API for embedding models. Use it to send embedding requests and to deploy embedding readers with Takeoff. Examples This is an example of how to deploy an embedding model and send requests; a sketch follows below. Initialize the Titan Takeoff embedding wrapper. Parameters base_url (str, optional) – The base url where the Takeoff Inference Server is listening. Defaults to "http://localhost". port (int, optional) – The port the Takeoff Inference API is listening on. Defaults to 3000. mgmt_port (int, optional) – The port the Takeoff Management API is listening on. Defaults to 3001. models (List[ReaderConfig], optional) – Any readers you’d like to spin up. Defaults to []. Raises ImportError – If you haven’t installed takeoff-client, you will get an ImportError. To remedy, run pip install 'takeoff-client==0.4.0'. Attributes base_url The base url where the Takeoff Inference Server is listening. Defaults to "http://localhost". client Takeoff Client Python SDK used to interact with Takeoff API embed_consumer_groups The consumer groups in Takeoff which contain embedding models mgmt_port The management port of the Titan Takeoff (Pro) server. port The port of the Titan Takeoff (Pro) server. Methods __init__([base_url, port, mgmt_port, models]) Initialize the Titan Takeoff embedding wrapper. aembed_documents(texts)
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.titan_takeoff.TitanTakeoffEmbed.html
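A sketch of the deployment-and-request flow referred to above, assuming a Takeoff server is running locally on the default ports; the reader model id and consumer group are illustrative:

from langchain_community.embeddings.titan_takeoff import ReaderConfig, TitanTakeoffEmbed

# Deploy an embedding reader when the wrapper starts (model id is illustrative).
reader = ReaderConfig(
    model_name="BAAI/bge-small-en-v1.5",
    consumer_group="embed",
)
embed = TitanTakeoffEmbed(models=[reader])

# Send requests to the consumer group the reader was placed in.
doc_vectors = embed.embed_documents(
    ["Takeoff serves embedding models"], consumer_group="embed"
)
query_vector = embed.embed_query("How do I embed a query?", consumer_group="embed")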
722e4139607c-1
Initialize the Titan Takeoff embedding wrapper. aembed_documents(texts) Asynchronous Embed search docs. aembed_query(text) Asynchronous Embed query text. embed_documents(texts[, consumer_group]) Embed documents. embed_query(text[, consumer_group]) Embed query. __init__(base_url: str = 'http://localhost', port: int = 3000, mgmt_port: int = 3001, models: List[ReaderConfig] = [])[source]¶ Initialize the Titan Takeoff embedding wrapper. Parameters base_url (str, optional) – The base url where the Takeoff Inference Server is listening. Defaults to "http://localhost". port (int, optional) – The port the Takeoff Inference API is listening on. Defaults to 3000. mgmt_port (int, optional) – The port the Takeoff Management API is listening on. Defaults to 3001. models (List[ReaderConfig], optional) – Any readers you’d like to spin up. Defaults to []. Raises ImportError – If you haven’t installed takeoff-client, you will get an ImportError. To remedy, run pip install 'takeoff-client==0.4.0'. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] embed_documents(texts: List[str], consumer_group: Optional[str] = None) → List[List[float]][source]¶ Embed documents. Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.titan_takeoff.TitanTakeoffEmbed.html
722e4139607c-2
Embed documents. Parameters texts (List[str]) – List of prompts/documents to embed consumer_group (Optional[str], optional) – Consumer group to send the request to, containing the embedding model. Defaults to None. Returns List of embeddings Return type List[List[float]] embed_query(text: str, consumer_group: Optional[str] = None) → List[float][source]¶ Embed query. Parameters text (str) – Prompt/document to embed consumer_group (Optional[str], optional) – Consumer group to send the request to, containing the embedding model. Defaults to None. Returns Embedding Return type List[float] Examples using TitanTakeoffEmbed¶ Titan Takeoff
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.titan_takeoff.TitanTakeoffEmbed.html
4d674ab74177-0
langchain_community.embeddings.google_palm.embed_with_retry¶ langchain_community.embeddings.google_palm.embed_with_retry(embeddings: GooglePalmEmbeddings, *args: Any, **kwargs: Any) → Any[source]¶ Use tenacity to retry the completion call. Parameters embeddings (GooglePalmEmbeddings) – args (Any) – kwargs (Any) – Return type Any
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.google_palm.embed_with_retry.html
aa21f048efe7-0
langchain_community.embeddings.gradient_ai.GradientEmbeddings¶ class langchain_community.embeddings.gradient_ai.GradientEmbeddings[source]¶ Bases: BaseModel, Embeddings Gradient.ai Embedding models. GradientEmbeddings is a class to interact with Embedding Models on gradient.ai. To use, set the environment variable GRADIENT_ACCESS_TOKEN with your API token and GRADIENT_WORKSPACE_ID for your gradient workspace, or alternatively provide them as keywords to the constructor of this class. Example from langchain_community.embeddings import GradientEmbeddings GradientEmbeddings( model="bge-large", gradient_workspace_id="12345614fc0_workspace", gradient_access_token="gradientai-access_token", ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param client: Any = None¶ Gradient client. param gradient_access_token: Optional[str] = None¶ gradient.ai API Token, which can be generated by going to https://auth.gradient.ai/select-workspace and selecting “Access tokens” under the profile drop-down. param gradient_api_url: str = 'https://api.gradient.ai/api'¶ Endpoint URL to use. param gradient_workspace_id: Optional[str] = None¶ Underlying gradient.ai workspace_id. param model: str [Required]¶ Underlying gradient.ai model id. param query_prompt_for_retrieval: Optional[str] = None¶ Query pre-prompt async aembed_documents(texts: List[str]) → List[List[float]][source]¶ Async call out to Gradient’s embedding endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]]
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.gradient_ai.GradientEmbeddings.html
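Following the constructor example above, a sketch of embedding documents and a query; it assumes GRADIENT_ACCESS_TOKEN and GRADIENT_WORKSPACE_ID are set in the environment instead of being passed explicitly:

from langchain_community.embeddings import GradientEmbeddings

embeddings = GradientEmbeddings(model="bge-large")
doc_vectors = embeddings.embed_documents(["Gradient hosts embedding models"])
query_vector = embeddings.embed_query("Which embedding models does Gradient host?")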
aa21f048efe7-1
Returns List of embeddings, one for each text. Return type List[List[float]] async aembed_query(text: str) → List[float][source]¶ Async call out to Gradient’s embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.gradient_ai.GradientEmbeddings.html
aa21f048efe7-2
self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Call out to Gradient’s embedding endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Call out to Gradient’s embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.gradient_ai.GradientEmbeddings.html
aa21f048efe7-3
Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.gradient_ai.GradientEmbeddings.html
aa21f048efe7-4
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using GradientEmbeddings¶ Gradient
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.gradient_ai.GradientEmbeddings.html
a523ed8f43fb-0
langchain_community.embeddings.edenai.EdenAiEmbeddings¶ class langchain_community.embeddings.edenai.EdenAiEmbeddings[source]¶ Bases: BaseModel, Embeddings EdenAI embedding models. To use, set the environment variable EDENAI_API_KEY with your API key, or pass it as a named parameter. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param edenai_api_key: Optional[SecretStr] = None¶ EdenAI API Token Constraints type = string writeOnly = True format = password param model: Optional[str] = None¶ model name for above provider (eg: ‘gpt-3.5-turbo-instruct’ for openai) available models are shown on https://docs.edenai.co/ under ‘available providers’ param provider: str = 'openai'¶ embedding provider to use (eg: openai, google, etc.) async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.edenai.EdenAiEmbeddings.html
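A minimal sketch, assuming EDENAI_API_KEY is set in the environment and using the default openai provider:

from langchain_community.embeddings import EdenAiEmbeddings

embeddings = EdenAiEmbeddings(provider="openai")
doc_vectors = embeddings.embed_documents(["Eden AI aggregates several embedding providers"])
query_vector = embeddings.embed_query("What does Eden AI aggregate?")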
a523ed8f43fb-1
values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.edenai.EdenAiEmbeddings.html
a523ed8f43fb-2
exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed a list of documents using EdenAI. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed a query using EdenAI. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model static get_user_agent() → str[source]¶ Return type str json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) –
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.edenai.EdenAiEmbeddings.html
a523ed8f43fb-3
by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.edenai.EdenAiEmbeddings.html
a523ed8f43fb-4
dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using EdenAiEmbeddings¶ EDEN AI Eden AI
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.edenai.EdenAiEmbeddings.html
6f8bbbc702b3-0
langchain_aws.embeddings.bedrock.BedrockEmbeddings¶ class langchain_aws.embeddings.bedrock.BedrockEmbeddings[source]¶ Bases: BaseModel, Embeddings Bedrock embedding models. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Bedrock service. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param client: Any = None¶ Bedrock client. param credentials_profile_name: Optional[str] = None¶ The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html param endpoint_url: Optional[str] = None¶ Needed if you don’t want to default to us-east-1 endpoint param model_id: str = 'amazon.titan-embed-text-v1'¶ Id of the model to call, e.g., amazon.titan-embed-text-v1, this is equivalent to the modelId property in the list-foundation-models api param model_kwargs: Optional[Dict] = None¶ Keyword arguments to pass to the model. param normalize: bool = False¶ Whether the embeddings should be normalized to unit vectors param region_name: Optional[str] = None¶
https://api.python.langchain.com/en/latest/embeddings/langchain_aws.embeddings.bedrock.BedrockEmbeddings.html
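A minimal sketch, assuming boto3 can resolve AWS credentials as described above; the model id is the documented default and the region is illustrative:

from langchain_aws.embeddings.bedrock import BedrockEmbeddings

embeddings = BedrockEmbeddings(
    model_id="amazon.titan-embed-text-v1",
    region_name="us-east-1",  # illustrative; falls back to AWS_DEFAULT_REGION if omitted
)
doc_vectors = embeddings.embed_documents(["Bedrock exposes Titan embedding models"])
query_vector = embeddings.embed_query("Which embedding models does Bedrock expose?")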
6f8bbbc702b3-1
param region_name: Optional[str] = None¶ The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable or region specified in ~/.aws/config in case it is not provided here. async aembed_documents(texts: List[str]) → List[List[float]][source]¶ Asynchronous compute doc embeddings using a Bedrock model. Parameters texts (List[str]) – The list of texts to embed Returns List of embeddings, one for each text. Return type List[List[float]] async aembed_query(text: str) → List[float][source]¶ Asynchronous compute query embeddings using a Bedrock model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
https://api.python.langchain.com/en/latest/embeddings/langchain_aws.embeddings.bedrock.BedrockEmbeddings.html
6f8bbbc702b3-2
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str], dim: int = 1024, norm: bool = True) → List[List[float]][source]¶ Compute doc embeddings using a Bedrock model. Parameters texts (List[str]) – The list of texts to embed dim (int) – norm (bool) – Returns List of embeddings, one for each text. Return type List[List[float]]
https://api.python.langchain.com/en/latest/embeddings/langchain_aws.embeddings.bedrock.BedrockEmbeddings.html
6f8bbbc702b3-3
Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str, dim: int = 1024, norm: bool = True) → List[float][source]¶ Compute query embeddings using a Bedrock model. Parameters text (str) – The text to embed. dim (int) – norm (bool) – Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode
https://api.python.langchain.com/en/latest/embeddings/langchain_aws.embeddings.bedrock.BedrockEmbeddings.html
6f8bbbc702b3-4
dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_aws.embeddings.bedrock.BedrockEmbeddings.html
6f8bbbc702b3-5
Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using BedrockEmbeddings¶ AWS Bedrock
https://api.python.langchain.com/en/latest/embeddings/langchain_aws.embeddings.bedrock.BedrockEmbeddings.html
38b789a896c9-0
langchain_fireworks.embeddings.FireworksEmbeddings¶ class langchain_fireworks.embeddings.FireworksEmbeddings[source]¶ Bases: BaseModel, Embeddings FireworksEmbeddings embedding model. Example from langchain_fireworks import FireworksEmbeddings model = FireworksEmbeddings( model='nomic-ai/nomic-embed-text-v1.5' ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param fireworks_api_key: SecretStr = SecretStr('')¶ Constraints type = string writeOnly = True format = password param model: str = 'nomic-ai/nomic-embed-text-v1.5'¶ async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_fireworks.embeddings.FireworksEmbeddings.html
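Extending the constructor example above, a sketch that assumes the Fireworks API key is supplied via the fireworks_api_key parameter or the environment:

from langchain_fireworks import FireworksEmbeddings

embeddings = FireworksEmbeddings(model="nomic-ai/nomic-embed-text-v1.5")
doc_vectors = embeddings.embed_documents(["Fireworks hosts open-weight embedding models"])
query_vector = embeddings.embed_query("Which embedding models does Fireworks host?")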
38b789a896c9-1
values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) –
https://api.python.langchain.com/en/latest/embeddings/langchain_fireworks.embeddings.FireworksEmbeddings.html
38b789a896c9-2
exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed query text. Parameters text (str) – Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode
https://api.python.langchain.com/en/latest/embeddings/langchain_fireworks.embeddings.FireworksEmbeddings.html
38b789a896c9-3
dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters
https://api.python.langchain.com/en/latest/embeddings/langchain_fireworks.embeddings.FireworksEmbeddings.html
38b789a896c9-4
Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using FireworksEmbeddings¶ FireworksEmbeddings
https://api.python.langchain.com/en/latest/embeddings/langchain_fireworks.embeddings.FireworksEmbeddings.html
ad45ba992eac-0
langchain_community.embeddings.titan_takeoff.ReaderConfig¶ class langchain_community.embeddings.titan_takeoff.ReaderConfig[source]¶ Bases: BaseModel Configuration for the reader to be deployed in Takeoff. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param consumer_group: str = 'primary'¶ The consumer group to place the reader into param device: Device = Device.cuda¶ The device to use for inference, cuda or cpu param model_name: str [Required]¶ The name of the model to use classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.titan_takeoff.ReaderConfig.html
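A short sketch of constructing a reader configuration and handing it to TitanTakeoffEmbed; the model id is illustrative, and model_name is the only required field:

from langchain_community.embeddings.titan_takeoff import ReaderConfig, TitanTakeoffEmbed

config = ReaderConfig(
    model_name="BAAI/bge-small-en-v1.5",  # illustrative model id
    consumer_group="primary",             # the documented default
)
print(config.dict())  # inspect the validated configuration
embed = TitanTakeoffEmbed(models=[config])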
ad45ba992eac-1
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.titan_takeoff.ReaderConfig.html
ad45ba992eac-2
Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.titan_takeoff.ReaderConfig.html
ad45ba992eac-3
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.titan_takeoff.ReaderConfig.html
22ecd8cc6724-0
langchain_community.embeddings.openai.async_embed_with_retry¶ async langchain_community.embeddings.openai.async_embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) → Any[source]¶ Use tenacity to retry the embedding call. Parameters embeddings (OpenAIEmbeddings) – kwargs (Any) – Return type Any
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.openai.async_embed_with_retry.html
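For illustration only, a hedged sketch of calling the helper directly; in practice it is used internally by OpenAIEmbeddings, and the keyword arguments are forwarded to the underlying embeddings client, so the exact names (input, model, and so on) depend on how the client is configured and should be treated as assumptions:

import asyncio
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.embeddings.openai import async_embed_with_retry

async def main() -> None:
    embeddings = OpenAIEmbeddings()  # reads OPENAI_API_KEY from the environment
    # The call is retried with backoff up to embeddings.max_retries times
    # before the last error is re-raised; **kwargs go straight to the client.
    response = await async_embed_with_retry(
        embeddings,
        input=["text to embed"],  # assumed client keyword
        model=embeddings.model,   # assumed client keyword
    )
    print(response)

asyncio.run(main())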
2ee4503386a3-0
langchain_community.embeddings.llamafile.LlamafileEmbeddings¶ class langchain_community.embeddings.llamafile.LlamafileEmbeddings[source]¶ Bases: BaseModel, Embeddings Llamafile lets you distribute and run large language models with a single file. To get started, see: https://github.com/Mozilla-Ocho/llamafile To use this class, you will need to first: Download a llamafile. Make the downloaded file executable: chmod +x path/to/model.llamafile Start the llamafile in server mode with embeddings enabled: ./path/to/model.llamafile --server --nobrowser --embedding Example from langchain_community.embeddings import LlamafileEmbeddings embedder = LlamafileEmbeddings() doc_embeddings = embedder.embed_documents( [ "Alpha is the first letter of the Greek alphabet", "Beta is the second letter of the Greek alphabet", ] ) query_embedding = embedder.embed_query( "What is the second letter of the Greek alphabet" ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param base_url: str = 'http://localhost:8080'¶ Base URL where the llamafile server is listening. param request_timeout: Optional[int] = None¶ Timeout for server requests async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float]
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.llamafile.LlamafileEmbeddings.html
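As a small addition to the docstring example above, a sketch of the async variants, assuming a llamafile server is already running locally with embeddings enabled as described in the setup steps:

import asyncio
from langchain_community.embeddings import LlamafileEmbeddings

async def main() -> None:
    embedder = LlamafileEmbeddings(
        base_url="http://localhost:8080",  # default server address
        request_timeout=30,                # optional per-request timeout (assumed to be seconds)
    )
    # aembed_documents / aembed_query mirror the synchronous methods.
    doc_vectors = await embedder.aembed_documents(
        ["Alpha is the first letter of the Greek alphabet"]
    )
    query_vector = await embedder.aembed_query("What is the first Greek letter?")
    print(len(doc_vectors), len(query_vector))

asyncio.run(main())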
2ee4503386a3-1
Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.llamafile.LlamafileEmbeddings.html
2ee4503386a3-2
self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed documents using a llamafile server running at self.base_url. llamafile server should be started in a separate process before invoking this method. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed a query using a llamafile server running at self.base_url. llamafile server should be started in a separate process before invoking this method. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.llamafile.LlamafileEmbeddings.html
2ee4503386a3-3
Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.llamafile.LlamafileEmbeddings.html
2ee4503386a3-4
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using LlamafileEmbeddings¶ llamafile
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.llamafile.LlamafileEmbeddings.html
77bdaab20d14-0
langchain_community.embeddings.minimax.MiniMaxEmbeddings¶ class langchain_community.embeddings.minimax.MiniMaxEmbeddings[source]¶ Bases: BaseModel, Embeddings MiniMax’s embedding service. To use, you should have the environment variables MINIMAX_GROUP_ID and MINIMAX_API_KEY set with your group ID and API key, or pass them as named parameters to the constructor. Example from langchain_community.embeddings import MiniMaxEmbeddings embeddings = MiniMaxEmbeddings() query_text = "This is a test query." query_result = embeddings.embed_query(query_text) document_text = "This is a test document." document_result = embeddings.embed_documents([document_text]) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param embed_type_db: str = 'db'¶ For embed_documents param embed_type_query: str = 'query'¶ For embed_query param endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'¶ Endpoint URL to use. param minimax_api_key: Optional[SecretStr] = None¶ API Key for MiniMax API. Constraints type = string writeOnly = True format = password param minimax_group_id: Optional[str] = None¶ Group ID for MiniMax API. param model: str = 'embo-01'¶ Embeddings model name to use. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. Parameters text (str) – Return type List[float]
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.minimax.MiniMaxEmbeddings.html
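Expanding on the example above, a sketch of passing the credentials explicitly instead of relying on environment variables; the key and group ID values are placeholders:

from langchain_community.embeddings import MiniMaxEmbeddings

embeddings = MiniMaxEmbeddings(
    minimax_api_key="YOUR_MINIMAX_API_KEY",    # or set the MINIMAX_API_KEY env var
    minimax_group_id="YOUR_MINIMAX_GROUP_ID",  # or set the MINIMAX_GROUP_ID env var
    model="embo-01",                           # documented default model
)

# Documents and queries are sent with different embed types (embed_type_db vs.
# embed_type_query), so use the method that matches your use case.
doc_vectors = embeddings.embed_documents(["MiniMax offers an embedding endpoint."])
query_vector = embeddings.embed_query("Which provider offers this endpoint?")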
77bdaab20d14-1
Parameters text (str) – Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.minimax.MiniMaxEmbeddings.html
77bdaab20d14-2
self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny embed(texts: List[str], embed_type: str) → List[List[float]][source]¶ Parameters texts (List[str]) – embed_type (str) – Return type List[List[float]] embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed documents using a MiniMax embedding endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed a query using a MiniMax embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.minimax.MiniMaxEmbeddings.html
77bdaab20d14-3
Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.minimax.MiniMaxEmbeddings.html
77bdaab20d14-4
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type Model Examples using MiniMaxEmbeddings¶ MiniMax Minimax
https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.minimax.MiniMaxEmbeddings.html
2fa34dc1d190-0
langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter¶ class langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: Optional[int] = None, **kwargs: Any)[source]¶ Splits text into tokens using a sentence-transformers model tokenizer. Create a new TextSplitter. Methods __init__([chunk_overlap, model_name, ...]) Create a new TextSplitter. atransform_documents(documents, **kwargs) Asynchronously transform a list of documents. count_tokens(*, text) create_documents(texts[, metadatas]) Create documents from a list of texts. from_huggingface_tokenizer(tokenizer, **kwargs) Text splitter that uses HuggingFace tokenizer to count length. from_tiktoken_encoder([encoding_name, ...]) Text splitter that uses tiktoken encoder to count length. split_documents(documents) Split documents. split_text(text) Split text into multiple components. transform_documents(documents, **kwargs) Transform sequence of documents by splitting them. Parameters chunk_overlap (int) – model_name (str) – tokens_per_chunk (Optional[int]) – kwargs (Any) – __init__(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: Optional[int] = None, **kwargs: Any) → None[source]¶ Create a new TextSplitter. Parameters chunk_overlap (int) – model_name (str) – tokens_per_chunk (Optional[int]) – kwargs (Any) – Return type None
https://api.python.langchain.com/en/latest/sentence_transformers/langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter.html
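A minimal sketch of the splitter in use; the chunk sizes are arbitrary, and the top-level import path assumes the usual langchain_text_splitters re-export:

from langchain_text_splitters import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(
    model_name="sentence-transformers/all-mpnet-base-v2",  # documented default
    tokens_per_chunk=200,  # explicit token budget per chunk
    chunk_overlap=50,      # tokens shared between consecutive chunks
)

text = "LangChain text splitters prepare long documents for embedding. " * 50
chunks = splitter.split_text(text)

# count_tokens is keyword-only and uses the same tokenizer that the chunk
# boundaries above are measured with.
print(len(chunks), splitter.count_tokens(text=text))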
2fa34dc1d190-1
kwargs (Any) – Return type None async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶ Asynchronously transform a list of documents. Parameters documents (Sequence[Document]) – A sequence of Documents to be transformed. kwargs (Any) – Returns A list of transformed Documents. Return type Sequence[Document] count_tokens(*, text: str) → int[source]¶ Parameters text (str) – Return type int create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶ Create documents from a list of texts. Parameters texts (List[str]) – metadatas (Optional[List[dict]]) – Return type List[Document] classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶ Text splitter that uses HuggingFace tokenizer to count length. Parameters tokenizer (Any) – kwargs (Any) – Return type TextSplitter classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶ Text splitter that uses tiktoken encoder to count length. Parameters encoding_name (str) – model_name (Optional[str]) – allowed_special (Union[Literal['all'], ~typing.AbstractSet[str]]) – disallowed_special (Union[Literal['all'], ~typing.Collection[str]]) – kwargs (Any) – Return type TS split_documents(documents: Iterable[Document]) → List[Document]¶
https://api.python.langchain.com/en/latest/sentence_transformers/langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter.html
2fa34dc1d190-2
TS split_documents(documents: Iterable[Document]) → List[Document]¶ Split documents. Parameters documents (Iterable[Document]) – Return type List[Document] split_text(text: str) → List[str][source]¶ Split text into multiple components. Parameters text (str) – Return type List[str] transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶ Transform sequence of documents by splitting them. Parameters documents (Sequence[Document]) – kwargs (Any) – Return type Sequence[Document] Examples using SentenceTransformersTokenTextSplitter¶ How to split text by tokens
https://api.python.langchain.com/en/latest/sentence_transformers/langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter.html
b84f4c839b44-0
langchain_community.cross_encoders.sagemaker_endpoint.SagemakerEndpointCrossEncoder¶ class langchain_community.cross_encoders.sagemaker_endpoint.SagemakerEndpointCrossEncoder[source]¶ Bases: BaseModel, BaseCrossEncoder SageMaker Inference CrossEncoder endpoint. To use, you must supply the endpoint name from your deployed Sagemaker model and the region where it is deployed. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param content_handler: CrossEncoderContentHandler = <langchain_community.cross_encoders.sagemaker_endpoint.CrossEncoderContentHandler object>¶ param credentials_profile_name: Optional[str] = None¶ The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html param endpoint_kwargs: Optional[Dict] = None¶ Optional attributes passed to the invoke_endpoint function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
https://api.python.langchain.com/en/latest/cross_encoders/langchain_community.cross_encoders.sagemaker_endpoint.SagemakerEndpointCrossEncoder.html
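A hedged sketch of scoring query/document pairs against a deployed endpoint; the endpoint name, region, and texts are placeholders, and the score() call is assumed to follow the BaseCrossEncoder interface of one relevance score per (text, text) pair:

from langchain_community.cross_encoders.sagemaker_endpoint import (
    SagemakerEndpointCrossEncoder,
)

cross_encoder = SagemakerEndpointCrossEncoder(
    endpoint_name="my-reranker-endpoint",  # placeholder SageMaker endpoint name
    region_name="us-west-2",               # region the endpoint is deployed in
    credentials_profile_name=None,         # None -> default credential chain / IMDS
    model_kwargs=None,                     # optional extra arguments for the model
)

pairs = [
    ("what is a cross encoder", "A cross encoder scores a query and a document jointly."),
    ("what is a cross encoder", "The Nile is the longest river in Africa."),
]
scores = cross_encoder.score(pairs)  # one float per pair; higher means more relevant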
b84f4c839b44-1
param endpoint_name: str = ''¶ The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region. param model_kwargs: Optional[Dict] = None¶ Keyword arguments to pass to the model. param region_name: str = ''¶ The AWS region where the Sagemaker model is deployed, e.g. us-west-2. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
https://api.python.langchain.com/en/latest/cross_encoders/langchain_community.cross_encoders.sagemaker_endpoint.SagemakerEndpointCrossEncoder.html
b84f4c839b44-2
self (Model) – Returns new model instance Return type Model dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters
https://api.python.langchain.com/en/latest/cross_encoders/langchain_community.cross_encoders.sagemaker_endpoint.SagemakerEndpointCrossEncoder.html
b84f4c839b44-3
Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny
https://api.python.langchain.com/en/latest/cross_encoders/langchain_community.cross_encoders.sagemaker_endpoint.SagemakerEndpointCrossEncoder.html